00:00:00.001 Started by upstream project "autotest-nightly" build number 4279 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3642 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.025 The recommended git tool is: git 00:00:00.025 using credential 00000000-0000-0000-0000-000000000002 00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.045 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.071 Using shallow fetch with depth 1 00:00:00.071 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.071 > git --version # timeout=10 00:00:00.100 > git --version # 'git version 2.39.2' 00:00:00.100 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.140 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.140 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.670 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.680 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.691 Checking out Revision 2fb890043673bc2650cdb1a52838125c51a12f85 (FETCH_HEAD) 00:00:02.691 > git config core.sparsecheckout # timeout=10 00:00:02.701 > git read-tree -mu HEAD # timeout=10 00:00:02.716 > git checkout -f 2fb890043673bc2650cdb1a52838125c51a12f85 # timeout=5 00:00:02.730 Commit message: 
"jenkins: update TLS certificates" 00:00:02.730 > git rev-list --no-walk 2fb890043673bc2650cdb1a52838125c51a12f85 # timeout=10 00:00:02.809 [Pipeline] Start of Pipeline 00:00:02.820 [Pipeline] library 00:00:02.821 Loading library shm_lib@master 00:00:02.822 Library shm_lib@master is cached. Copying from home. 00:00:02.835 [Pipeline] node 00:00:02.856 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.858 [Pipeline] { 00:00:02.867 [Pipeline] catchError 00:00:02.868 [Pipeline] { 00:00:02.881 [Pipeline] wrap 00:00:02.889 [Pipeline] { 00:00:02.897 [Pipeline] stage 00:00:02.899 [Pipeline] { (Prologue) 00:00:02.917 [Pipeline] echo 00:00:02.918 Node: VM-host-WFP7 00:00:02.926 [Pipeline] cleanWs 00:00:02.936 [WS-CLEANUP] Deleting project workspace... 00:00:02.936 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.943 [WS-CLEANUP] done 00:00:03.115 [Pipeline] setCustomBuildProperty 00:00:03.177 [Pipeline] httpRequest 00:00:03.599 [Pipeline] echo 00:00:03.600 Sorcerer 10.211.164.20 is alive 00:00:03.608 [Pipeline] retry 00:00:03.610 [Pipeline] { 00:00:03.619 [Pipeline] httpRequest 00:00:03.623 HttpMethod: GET 00:00:03.624 URL: http://10.211.164.20/packages/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:03.624 Sending request to url: http://10.211.164.20/packages/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:03.625 Response Code: HTTP/1.1 200 OK 00:00:03.625 Success: Status code 200 is in the accepted range: 200,404 00:00:03.625 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:03.771 [Pipeline] } 00:00:03.783 [Pipeline] // retry 00:00:03.789 [Pipeline] sh 00:00:04.068 + tar --no-same-owner -xf jbp_2fb890043673bc2650cdb1a52838125c51a12f85.tar.gz 00:00:04.082 [Pipeline] httpRequest 00:00:04.440 [Pipeline] echo 00:00:04.442 Sorcerer 10.211.164.20 is alive 00:00:04.450 [Pipeline] retry 00:00:04.452 [Pipeline] { 00:00:04.473 [Pipeline] httpRequest 00:00:04.478 
HttpMethod: GET 00:00:04.479 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:04.479 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:04.480 Response Code: HTTP/1.1 200 OK 00:00:04.481 Success: Status code 200 is in the accepted range: 200,404 00:00:04.481 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:21.734 [Pipeline] } 00:00:21.752 [Pipeline] // retry 00:00:21.759 [Pipeline] sh 00:00:22.047 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:24.604 [Pipeline] sh 00:00:24.890 + git -C spdk log --oneline -n5 00:00:24.890 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:24.890 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:24.890 4bcab9fb9 correct kick for CQ full case 00:00:24.890 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:24.890 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:24.911 [Pipeline] writeFile 00:00:24.929 [Pipeline] sh 00:00:25.220 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:25.233 [Pipeline] sh 00:00:25.519 + cat autorun-spdk.conf 00:00:25.519 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:25.519 SPDK_RUN_ASAN=1 00:00:25.519 SPDK_RUN_UBSAN=1 00:00:25.519 SPDK_TEST_RAID=1 00:00:25.519 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:25.527 RUN_NIGHTLY=1 00:00:25.529 [Pipeline] } 00:00:25.543 [Pipeline] // stage 00:00:25.558 [Pipeline] stage 00:00:25.560 [Pipeline] { (Run VM) 00:00:25.573 [Pipeline] sh 00:00:25.859 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:25.859 + echo 'Start stage prepare_nvme.sh' 00:00:25.859 Start stage prepare_nvme.sh 00:00:25.859 + [[ -n 1 ]] 00:00:25.859 + disk_prefix=ex1 00:00:25.859 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:25.859 + [[ 
-e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:25.859 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:25.859 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:25.859 ++ SPDK_RUN_ASAN=1 00:00:25.859 ++ SPDK_RUN_UBSAN=1 00:00:25.859 ++ SPDK_TEST_RAID=1 00:00:25.859 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:25.859 ++ RUN_NIGHTLY=1 00:00:25.859 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:25.859 + nvme_files=() 00:00:25.859 + declare -A nvme_files 00:00:25.859 + backend_dir=/var/lib/libvirt/images/backends 00:00:25.859 + nvme_files['nvme.img']=5G 00:00:25.859 + nvme_files['nvme-cmb.img']=5G 00:00:25.859 + nvme_files['nvme-multi0.img']=4G 00:00:25.859 + nvme_files['nvme-multi1.img']=4G 00:00:25.859 + nvme_files['nvme-multi2.img']=4G 00:00:25.859 + nvme_files['nvme-openstack.img']=8G 00:00:25.859 + nvme_files['nvme-zns.img']=5G 00:00:25.859 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:25.859 + (( SPDK_TEST_FTL == 1 )) 00:00:25.859 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:25.859 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:25.859 + for nvme in "${!nvme_files[@]}" 00:00:25.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:25.859 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:25.859 + for nvme in "${!nvme_files[@]}" 00:00:25.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:25.859 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:25.859 + for nvme in "${!nvme_files[@]}" 00:00:25.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:25.859 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:25.859 + for nvme in "${!nvme_files[@]}" 00:00:25.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:25.859 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:25.859 + for nvme in "${!nvme_files[@]}" 00:00:25.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:25.859 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:25.859 + for nvme in "${!nvme_files[@]}" 00:00:25.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:25.860 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:25.860 + for nvme in "${!nvme_files[@]}" 00:00:25.860 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:26.120 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.120 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:26.120 + echo 'End stage prepare_nvme.sh' 00:00:26.120 End stage prepare_nvme.sh 00:00:26.133 [Pipeline] sh 00:00:26.419 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:26.419 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:00:26.419 00:00:26.419 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:26.419 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:26.419 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:26.419 HELP=0 00:00:26.419 DRY_RUN=0 00:00:26.419 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:26.419 NVME_DISKS_TYPE=nvme,nvme, 00:00:26.419 NVME_AUTO_CREATE=0 00:00:26.419 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:26.419 NVME_CMB=,, 00:00:26.419 NVME_PMR=,, 00:00:26.419 NVME_ZNS=,, 00:00:26.419 NVME_MS=,, 00:00:26.419 NVME_FDP=,, 00:00:26.419 SPDK_VAGRANT_DISTRO=fedora39 00:00:26.419 SPDK_VAGRANT_VMCPU=10 00:00:26.419 SPDK_VAGRANT_VMRAM=12288 00:00:26.419 SPDK_VAGRANT_PROVIDER=libvirt 00:00:26.419 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:26.419 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:26.419 SPDK_OPENSTACK_NETWORK=0 00:00:26.419 VAGRANT_PACKAGE_BOX=0 00:00:26.419 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:26.419 
FORCE_DISTRO=true 00:00:26.419 VAGRANT_BOX_VERSION= 00:00:26.419 EXTRA_VAGRANTFILES= 00:00:26.419 NIC_MODEL=virtio 00:00:26.419 00:00:26.419 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:26.419 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:28.326 Bringing machine 'default' up with 'libvirt' provider... 00:00:28.901 ==> default: Creating image (snapshot of base box volume). 00:00:28.901 ==> default: Creating domain with the following settings... 00:00:28.901 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731925734_4405e861186075335e97 00:00:28.901 ==> default: -- Domain type: kvm 00:00:28.901 ==> default: -- Cpus: 10 00:00:28.901 ==> default: -- Feature: acpi 00:00:28.901 ==> default: -- Feature: apic 00:00:28.901 ==> default: -- Feature: pae 00:00:28.901 ==> default: -- Memory: 12288M 00:00:28.901 ==> default: -- Memory Backing: hugepages: 00:00:28.901 ==> default: -- Management MAC: 00:00:28.901 ==> default: -- Loader: 00:00:28.901 ==> default: -- Nvram: 00:00:28.901 ==> default: -- Base box: spdk/fedora39 00:00:28.901 ==> default: -- Storage pool: default 00:00:28.901 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731925734_4405e861186075335e97.img (20G) 00:00:28.901 ==> default: -- Volume Cache: default 00:00:28.901 ==> default: -- Kernel: 00:00:28.901 ==> default: -- Initrd: 00:00:28.901 ==> default: -- Graphics Type: vnc 00:00:28.901 ==> default: -- Graphics Port: -1 00:00:28.901 ==> default: -- Graphics IP: 127.0.0.1 00:00:28.901 ==> default: -- Graphics Password: Not defined 00:00:28.901 ==> default: -- Video Type: cirrus 00:00:28.901 ==> default: -- Video VRAM: 9216 00:00:28.901 ==> default: -- Sound Type: 00:00:28.901 ==> default: -- Keymap: en-us 00:00:28.901 ==> default: -- TPM Path: 00:00:28.901 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:28.901 ==> default: -- Command line args: 00:00:28.901 
==> default: -> value=-device, 00:00:28.901 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:28.901 ==> default: -> value=-drive, 00:00:28.901 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:28.901 ==> default: -> value=-device, 00:00:28.901 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:28.901 ==> default: -> value=-device, 00:00:28.901 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:28.901 ==> default: -> value=-drive, 00:00:28.901 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:28.901 ==> default: -> value=-device, 00:00:28.901 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:28.901 ==> default: -> value=-drive, 00:00:28.901 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:28.901 ==> default: -> value=-device, 00:00:28.901 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:28.901 ==> default: -> value=-drive, 00:00:28.901 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:28.901 ==> default: -> value=-device, 00:00:28.901 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.200 ==> default: Creating shared folders metadata... 00:00:29.200 ==> default: Starting domain. 00:00:30.593 ==> default: Waiting for domain to get an IP address... 00:00:48.708 ==> default: Waiting for SSH to become available... 00:00:48.708 ==> default: Configuring and enabling network interfaces... 
00:00:53.992 default: SSH address: 192.168.121.201:22 00:00:53.992 default: SSH username: vagrant 00:00:53.992 default: SSH auth method: private key 00:00:57.291 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:05.423 ==> default: Mounting SSHFS shared folder... 00:01:07.359 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:07.359 ==> default: Checking Mount.. 00:01:08.742 ==> default: Folder Successfully Mounted! 00:01:08.742 ==> default: Running provisioner: file... 00:01:09.680 default: ~/.gitconfig => .gitconfig 00:01:10.249 00:01:10.249 SUCCESS! 00:01:10.249 00:01:10.249 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:10.249 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:10.249 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:10.249 00:01:10.259 [Pipeline] } 00:01:10.274 [Pipeline] // stage 00:01:10.283 [Pipeline] dir 00:01:10.284 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:10.286 [Pipeline] { 00:01:10.298 [Pipeline] catchError 00:01:10.300 [Pipeline] { 00:01:10.313 [Pipeline] sh 00:01:10.609 + vagrant ssh-config --host vagrant 00:01:10.609 + sed -ne /^Host/,$p 00:01:10.609 + tee ssh_conf 00:01:13.149 Host vagrant 00:01:13.149 HostName 192.168.121.201 00:01:13.149 User vagrant 00:01:13.149 Port 22 00:01:13.149 UserKnownHostsFile /dev/null 00:01:13.149 StrictHostKeyChecking no 00:01:13.149 PasswordAuthentication no 00:01:13.149 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:13.149 IdentitiesOnly yes 00:01:13.149 LogLevel FATAL 00:01:13.149 ForwardAgent yes 00:01:13.149 ForwardX11 yes 00:01:13.149 00:01:13.164 [Pipeline] withEnv 00:01:13.166 [Pipeline] { 00:01:13.179 [Pipeline] sh 00:01:13.465 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:13.465 source /etc/os-release 00:01:13.465 [[ -e /image.version ]] && img=$(< /image.version) 00:01:13.465 # Minimal, systemd-like check. 00:01:13.465 if [[ -e /.dockerenv ]]; then 00:01:13.465 # Clear garbage from the node's name: 00:01:13.465 # agt-er_autotest_547-896 -> autotest_547-896 00:01:13.465 # $HOSTNAME is the actual container id 00:01:13.465 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:13.465 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:13.465 # We can assume this is a mount from a host where container is running, 00:01:13.465 # so fetch its hostname to easily identify the target swarm worker. 
00:01:13.465 container="$(< /etc/hostname) ($agent)" 00:01:13.465 else 00:01:13.465 # Fallback 00:01:13.465 container=$agent 00:01:13.465 fi 00:01:13.465 fi 00:01:13.465 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:13.465 00:01:13.737 [Pipeline] } 00:01:13.751 [Pipeline] // withEnv 00:01:13.758 [Pipeline] setCustomBuildProperty 00:01:13.770 [Pipeline] stage 00:01:13.772 [Pipeline] { (Tests) 00:01:13.788 [Pipeline] sh 00:01:14.071 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:14.347 [Pipeline] sh 00:01:14.632 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:14.908 [Pipeline] timeout 00:01:14.909 Timeout set to expire in 1 hr 30 min 00:01:14.911 [Pipeline] { 00:01:14.925 [Pipeline] sh 00:01:15.215 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:15.786 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:15.801 [Pipeline] sh 00:01:16.086 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:16.363 [Pipeline] sh 00:01:16.649 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:16.927 [Pipeline] sh 00:01:17.214 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:17.475 ++ readlink -f spdk_repo 00:01:17.475 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:17.475 + [[ -n /home/vagrant/spdk_repo ]] 00:01:17.475 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:17.475 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:17.475 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:17.475 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:17.475 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:17.475 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:17.475 + cd /home/vagrant/spdk_repo 00:01:17.475 + source /etc/os-release 00:01:17.475 ++ NAME='Fedora Linux' 00:01:17.475 ++ VERSION='39 (Cloud Edition)' 00:01:17.475 ++ ID=fedora 00:01:17.475 ++ VERSION_ID=39 00:01:17.475 ++ VERSION_CODENAME= 00:01:17.475 ++ PLATFORM_ID=platform:f39 00:01:17.475 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:17.475 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.475 ++ LOGO=fedora-logo-icon 00:01:17.475 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:17.475 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.475 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:17.475 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.475 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.475 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.475 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:17.475 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.475 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:17.475 ++ SUPPORT_END=2024-11-12 00:01:17.475 ++ VARIANT='Cloud Edition' 00:01:17.475 ++ VARIANT_ID=cloud 00:01:17.475 + uname -a 00:01:17.475 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:17.475 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:18.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:18.046 Hugepages 00:01:18.046 node hugesize free / total 00:01:18.046 node0 1048576kB 0 / 0 00:01:18.046 node0 2048kB 0 / 0 00:01:18.046 00:01:18.046 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.046 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:18.046 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:18.046 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:18.046 + rm -f /tmp/spdk-ld-path 00:01:18.046 + source autorun-spdk.conf 00:01:18.046 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.046 ++ SPDK_RUN_ASAN=1 00:01:18.046 ++ SPDK_RUN_UBSAN=1 00:01:18.046 ++ SPDK_TEST_RAID=1 00:01:18.046 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.046 ++ RUN_NIGHTLY=1 00:01:18.046 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.046 + [[ -n '' ]] 00:01:18.046 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:18.307 + for M in /var/spdk/build-*-manifest.txt 00:01:18.307 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:18.307 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.307 + for M in /var/spdk/build-*-manifest.txt 00:01:18.307 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.307 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.307 + for M in /var/spdk/build-*-manifest.txt 00:01:18.307 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.307 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.307 ++ uname 00:01:18.307 + [[ Linux == \L\i\n\u\x ]] 00:01:18.307 + sudo dmesg -T 00:01:18.307 + sudo dmesg --clear 00:01:18.307 + dmesg_pid=5425 00:01:18.307 + [[ Fedora Linux == FreeBSD ]] 00:01:18.307 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.307 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.307 + sudo dmesg -Tw 00:01:18.307 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.307 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.307 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.307 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.307 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.307 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:18.307 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.307 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.307 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.307 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.307 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.307 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.307 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:18.568 10:29:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:18.568 10:29:44 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:18.568 10:29:44 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.568 10:29:44 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:18.568 10:29:44 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:18.568 10:29:44 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:18.568 10:29:44 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.568 10:29:44 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:01:18.568 10:29:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:18.568 10:29:44 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:18.568 10:29:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:18.568 10:29:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:18.568 10:29:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:18.568 10:29:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.568 10:29:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.568 10:29:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.568 10:29:44 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.568 10:29:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.568 10:29:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.568 10:29:44 -- paths/export.sh@5 -- $ export PATH 00:01:18.568 10:29:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.568 10:29:44 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:18.568 10:29:44 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:18.568 10:29:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731925784.XXXXXX 00:01:18.568 10:29:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731925784.nRR0g9 00:01:18.568 10:29:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:18.568 10:29:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:18.568 10:29:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:18.568 10:29:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:18.568 10:29:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.568 10:29:44 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:18.568 10:29:44 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:18.568 10:29:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.568 10:29:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:18.568 10:29:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:18.568 10:29:44 -- pm/common@17 -- $ local monitor 00:01:18.568 10:29:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.568 10:29:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.568 10:29:44 -- pm/common@25 -- $ sleep 1 00:01:18.568 10:29:44 -- pm/common@21 -- $ date +%s 00:01:18.568 10:29:44 -- pm/common@21 -- $ date +%s 00:01:18.568 
10:29:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731925784 00:01:18.569 10:29:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731925784 00:01:18.569 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731925784_collect-vmstat.pm.log 00:01:18.569 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731925784_collect-cpu-load.pm.log 00:01:19.511 10:29:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:19.511 10:29:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.511 10:29:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.511 10:29:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:19.511 10:29:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.511 Mon Nov 18 10:29:45 AM UTC 2024 00:01:19.511 10:29:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.511 v25.01-pre-189-g83e8405e4 00:01:19.511 10:29:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:19.511 10:29:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:19.511 10:29:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:19.511 10:29:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:19.511 10:29:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.511 ************************************ 00:01:19.511 START TEST asan 00:01:19.511 ************************************ 00:01:19.511 using asan 00:01:19.511 10:29:45 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:19.511 00:01:19.511 real 0m0.001s 00:01:19.511 user 0m0.000s 00:01:19.511 sys 0m0.000s 00:01:19.511 10:29:45 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:19.511 10:29:45 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:19.511 ************************************
00:01:19.511 END TEST asan
00:01:19.511 ************************************
00:01:19.772 10:29:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:19.772 10:29:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:19.772 10:29:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:19.772 10:29:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:19.772 10:29:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:19.772 ************************************
00:01:19.772 START TEST ubsan
00:01:19.772 ************************************
00:01:19.772 using ubsan
00:01:19.772 10:29:45 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:19.772
00:01:19.772 real 0m0.001s
00:01:19.772 user 0m0.000s
00:01:19.772 sys 0m0.001s
00:01:19.772 10:29:45 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:19.772 ************************************
00:01:19.772 END TEST ubsan
00:01:19.772 ************************************
00:01:19.772 10:29:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:19.772 10:29:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:19.772 10:29:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:19.772 10:29:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:19.772 10:29:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:19.772 10:29:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:19.772 10:29:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:19.772 10:29:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:19.772 10:29:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:19.772 10:29:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:20.033 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:20.033 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:20.602 Using 'verbs' RDMA provider
00:01:36.439 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:54.551 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:54.551 Creating mk/config.mk...done.
00:01:54.551 Creating mk/cc.flags.mk...done.
00:01:54.551 Type 'make' to build.
00:01:54.551 10:30:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:54.551 10:30:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:54.551 10:30:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:54.551 10:30:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.551 ************************************
00:01:54.551 START TEST make
00:01:54.551 ************************************
00:01:54.551 10:30:18 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:54.551 make[1]: Nothing to be done for 'all'.
00:02:02.683 The Meson build system
00:02:02.683 Version: 1.5.0
00:02:02.684 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:02.684 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:02.684 Build type: native build
00:02:02.684 Program cat found: YES (/usr/bin/cat)
00:02:02.684 Project name: DPDK
00:02:02.684 Project version: 24.03.0
00:02:02.684 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:02.684 C linker for the host machine: cc ld.bfd 2.40-14
00:02:02.684 Host machine cpu family: x86_64
00:02:02.684 Host machine cpu: x86_64
00:02:02.684 Message: ## Building in Developer Mode ##
00:02:02.684 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:02.684 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:02.684 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:02.684 Program python3 found: YES (/usr/bin/python3)
00:02:02.684 Program cat found: YES (/usr/bin/cat)
00:02:02.684 Compiler for C supports arguments -march=native: YES
00:02:02.684 Checking for size of "void *" : 8
00:02:02.684 Checking for size of "void *" : 8 (cached)
00:02:02.684 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:02.684 Library m found: YES
00:02:02.684 Library numa found: YES
00:02:02.684 Has header "numaif.h" : YES
00:02:02.684 Library fdt found: NO
00:02:02.684 Library execinfo found: NO
00:02:02.684 Has header "execinfo.h" : YES
00:02:02.684 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:02.684 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:02.684 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:02.684 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:02.684 Run-time dependency openssl found: YES 3.1.1
00:02:02.684 Run-time dependency libpcap found: YES 1.10.4
00:02:02.684 Has header "pcap.h" with dependency libpcap: YES
00:02:02.685 Compiler for C supports arguments -Wcast-qual: YES
00:02:02.685 Compiler for C supports arguments -Wdeprecated: YES
00:02:02.685 Compiler for C supports arguments -Wformat: YES
00:02:02.685 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:02.685 Compiler for C supports arguments -Wformat-security: NO
00:02:02.685 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:02.685 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:02.685 Compiler for C supports arguments -Wnested-externs: YES
00:02:02.685 Compiler for C supports arguments -Wold-style-definition: YES
00:02:02.685 Compiler for C supports arguments -Wpointer-arith: YES
00:02:02.685 Compiler for C supports arguments -Wsign-compare: YES
00:02:02.685 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:02.685 Compiler for C supports arguments -Wundef: YES
00:02:02.685 Compiler for C supports arguments -Wwrite-strings: YES
00:02:02.685 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:02.685 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:02.685 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:02.685 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:02.685 Program objdump found: YES (/usr/bin/objdump)
00:02:02.685 Compiler for C supports arguments -mavx512f: YES
00:02:02.685 Checking if "AVX512 checking" compiles: YES
00:02:02.685 Fetching value of define "__SSE4_2__" : 1
00:02:02.685 Fetching value of define "__AES__" : 1
00:02:02.685 Fetching value of define "__AVX__" : 1
00:02:02.685 Fetching value of define "__AVX2__" : 1
00:02:02.685 Fetching value of define "__AVX512BW__" : 1
00:02:02.685 Fetching value of define "__AVX512CD__" : 1
00:02:02.685 Fetching value of define "__AVX512DQ__" : 1
00:02:02.685 Fetching value of define "__AVX512F__" : 1
00:02:02.685 Fetching value of define "__AVX512VL__" : 1
00:02:02.685 Fetching value of define "__PCLMUL__" : 1
00:02:02.685 Fetching value of define "__RDRND__" : 1
00:02:02.685 Fetching value of define "__RDSEED__" : 1
00:02:02.685 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:02.685 Fetching value of define "__znver1__" : (undefined)
00:02:02.686 Fetching value of define "__znver2__" : (undefined)
00:02:02.686 Fetching value of define "__znver3__" : (undefined)
00:02:02.686 Fetching value of define "__znver4__" : (undefined)
00:02:02.686 Library asan found: YES
00:02:02.686 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:02.686 Message: lib/log: Defining dependency "log"
00:02:02.686 Message: lib/kvargs: Defining dependency "kvargs"
00:02:02.686 Message: lib/telemetry: Defining dependency "telemetry"
00:02:02.686 Library rt found: YES
00:02:02.686 Checking for function "getentropy" : NO
00:02:02.686 Message: lib/eal: Defining dependency "eal"
00:02:02.686 Message: lib/ring: Defining dependency "ring"
00:02:02.686 Message: lib/rcu: Defining dependency "rcu"
00:02:02.686 Message: lib/mempool: Defining dependency "mempool"
00:02:02.686 Message: lib/mbuf: Defining dependency "mbuf"
00:02:02.686 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:02.686 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:02.686 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:02.686 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:02.686 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:02.686 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:02.686 Compiler for C supports arguments -mpclmul: YES
00:02:02.686 Compiler for C supports arguments -maes: YES
00:02:02.686 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:02.686 Compiler for C supports arguments -mavx512bw: YES
00:02:02.686 Compiler for C supports arguments -mavx512dq: YES
00:02:02.686 Compiler for C supports arguments -mavx512vl: YES
00:02:02.686 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:02.686 Compiler for C supports arguments -mavx2: YES
00:02:02.686 Compiler for C supports arguments -mavx: YES
00:02:02.686 Message: lib/net: Defining dependency "net"
00:02:02.686 Message: lib/meter: Defining dependency "meter"
00:02:02.686 Message: lib/ethdev: Defining dependency "ethdev"
00:02:02.686 Message: lib/pci: Defining dependency "pci"
00:02:02.686 Message: lib/cmdline: Defining dependency "cmdline"
00:02:02.686 Message: lib/hash: Defining dependency "hash"
00:02:02.686 Message: lib/timer: Defining dependency "timer"
00:02:02.686 Message: lib/compressdev: Defining dependency "compressdev"
00:02:02.686 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:02.686 Message: lib/dmadev: Defining dependency "dmadev"
00:02:02.686 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:02.686 Message: lib/power: Defining dependency "power"
00:02:02.686 Message: lib/reorder: Defining dependency "reorder"
00:02:02.686 Message: lib/security: Defining dependency "security"
00:02:02.686 Has header "linux/userfaultfd.h" : YES
00:02:02.686 Has header "linux/vduse.h" : YES
00:02:02.686 Message: lib/vhost: Defining dependency "vhost"
00:02:02.686 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:02.686 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:02.686 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:02.686 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:02.686 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:02.686 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:02.686 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:02.686 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:02.686 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:02.686 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:02.687 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:02.687 Configuring doxy-api-html.conf using configuration
00:02:02.687 Configuring doxy-api-man.conf using configuration
00:02:02.687 Program mandb found: YES (/usr/bin/mandb)
00:02:02.687 Program sphinx-build found: NO
00:02:02.687 Configuring rte_build_config.h using configuration
00:02:02.687 Message:
00:02:02.687 =================
00:02:02.687 Applications Enabled
00:02:02.687 =================
00:02:02.687
00:02:02.687 apps:
00:02:02.687
00:02:02.687
00:02:02.687 Message:
00:02:02.687 =================
00:02:02.687 Libraries Enabled
00:02:02.687 =================
00:02:02.687
00:02:02.687 libs:
00:02:02.687 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:02.687 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:02.687 cryptodev, dmadev, power, reorder, security, vhost,
00:02:02.687
00:02:02.687 Message:
00:02:02.687 ===============
00:02:02.687 Drivers Enabled
00:02:02.687 ===============
00:02:02.687
00:02:02.687 common:
00:02:02.687
00:02:02.687 bus:
00:02:02.687 pci, vdev,
00:02:02.687 mempool:
00:02:02.687 ring,
00:02:02.687 dma:
00:02:02.687
00:02:02.687 net:
00:02:02.687
00:02:02.687 crypto:
00:02:02.687
00:02:02.687 compress:
00:02:02.687
00:02:02.687 vdpa:
00:02:02.687
00:02:02.687
00:02:02.687 Message:
00:02:02.687 =================
00:02:02.687 Content Skipped
00:02:02.687 =================
00:02:02.687
00:02:02.688 apps:
00:02:02.688 dumpcap: explicitly disabled via build config
00:02:02.688 graph: explicitly disabled via build config
00:02:02.688 pdump: explicitly disabled via build config
00:02:02.688 proc-info: explicitly disabled via build config
00:02:02.688 test-acl: explicitly disabled via build config
00:02:02.688 test-bbdev: explicitly disabled via build config
00:02:02.688 test-cmdline: explicitly disabled via build config
00:02:02.688 test-compress-perf: explicitly disabled via build config
00:02:02.688 test-crypto-perf: explicitly disabled via build config
00:02:02.688 test-dma-perf: explicitly disabled via build config
00:02:02.688 test-eventdev: explicitly disabled via build config
00:02:02.688 test-fib: explicitly disabled via build config
00:02:02.688 test-flow-perf: explicitly disabled via build config
00:02:02.688 test-gpudev: explicitly disabled via build config
00:02:02.688 test-mldev: explicitly disabled via build config
00:02:02.688 test-pipeline: explicitly disabled via build config
00:02:02.688 test-pmd: explicitly disabled via build config
00:02:02.688 test-regex: explicitly disabled via build config
00:02:02.688 test-sad: explicitly disabled via build config
00:02:02.688 test-security-perf: explicitly disabled via build config
00:02:02.688
00:02:02.688 libs:
00:02:02.688 argparse: explicitly disabled via build config
00:02:02.688 metrics: explicitly disabled via build config
00:02:02.688 acl: explicitly disabled via build config
00:02:02.688 bbdev: explicitly disabled via build config
00:02:02.688 bitratestats: explicitly disabled via build config
00:02:02.688 bpf: explicitly disabled via build config
00:02:02.688 cfgfile: explicitly disabled via build config
00:02:02.688 distributor: explicitly disabled via build config
00:02:02.688 efd: explicitly disabled via build config
00:02:02.688 eventdev: explicitly disabled via build config
00:02:02.688 dispatcher: explicitly disabled via build config
00:02:02.688 gpudev: explicitly disabled via build config
00:02:02.688 gro: explicitly disabled via build config
00:02:02.688 gso: explicitly disabled via build config
00:02:02.688 ip_frag: explicitly disabled via build config
00:02:02.688 jobstats: explicitly disabled via build config
00:02:02.688 latencystats: explicitly disabled via build config
00:02:02.688 lpm: explicitly disabled via build config
00:02:02.688 member: explicitly disabled via build config
00:02:02.689 pcapng: explicitly disabled via build config
00:02:02.689 rawdev: explicitly disabled via build config
00:02:02.689 regexdev: explicitly disabled via build config
00:02:02.689 mldev: explicitly disabled via build config
00:02:02.689 rib: explicitly disabled via build config
00:02:02.689 sched: explicitly disabled via build config
00:02:02.689 stack: explicitly disabled via build config
00:02:02.689 ipsec: explicitly disabled via build config
00:02:02.689 pdcp: explicitly disabled via build config
00:02:02.689 fib: explicitly disabled via build config
00:02:02.689 port: explicitly disabled via build config
00:02:02.689 pdump: explicitly disabled via build config
00:02:02.689 table: explicitly disabled via build config
00:02:02.689 pipeline: explicitly disabled via build config
00:02:02.689 graph: explicitly disabled via build config
00:02:02.689 node: explicitly disabled via build config
00:02:02.689
00:02:02.689 drivers:
00:02:02.689 common/cpt: not in enabled drivers build config
00:02:02.689 common/dpaax: not in enabled drivers build config
00:02:02.689 common/iavf: not in enabled drivers build config
00:02:02.689 common/idpf: not in enabled drivers build config
00:02:02.689 common/ionic: not in enabled drivers build config
00:02:02.689 common/mvep: not in enabled drivers build config
00:02:02.689 common/octeontx: not in enabled drivers build config
00:02:02.689 bus/auxiliary: not in enabled drivers build config
00:02:02.689 bus/cdx: not in enabled drivers build config
00:02:02.689 bus/dpaa: not in enabled drivers build config
00:02:02.689 bus/fslmc: not in enabled drivers build config
00:02:02.690 bus/ifpga: not in enabled drivers build config
00:02:02.690 bus/platform: not in enabled drivers build config
00:02:02.690 bus/uacce: not in enabled drivers build config
00:02:02.690 bus/vmbus: not in enabled drivers build config
00:02:02.690 common/cnxk: not in enabled drivers build config
00:02:02.690 common/mlx5: not in enabled drivers build config
00:02:02.690 common/nfp: not in enabled drivers build config
00:02:02.690 common/nitrox: not in enabled drivers build config
00:02:02.690 common/qat: not in enabled drivers build config
00:02:02.690 common/sfc_efx: not in enabled drivers build config
00:02:02.690 mempool/bucket: not in enabled drivers build config
00:02:02.690 mempool/cnxk: not in enabled drivers build config
00:02:02.690 mempool/dpaa: not in enabled drivers build config
00:02:02.690 mempool/dpaa2: not in enabled drivers build config
00:02:02.690 mempool/octeontx: not in enabled drivers build config
00:02:02.690 mempool/stack: not in enabled drivers build config
00:02:02.690 dma/cnxk: not in enabled drivers build config
00:02:02.690 dma/dpaa: not in enabled drivers build config
00:02:02.690 dma/dpaa2: not in enabled drivers build config
00:02:02.690 dma/hisilicon: not in enabled drivers build config
00:02:02.690 dma/idxd: not in enabled drivers build config
00:02:02.690 dma/ioat: not in enabled drivers build config
00:02:02.690 dma/skeleton: not in enabled drivers build config
00:02:02.690 net/af_packet: not in enabled drivers build config
00:02:02.690 net/af_xdp: not in enabled drivers build config
00:02:02.690 net/ark: not in enabled drivers build config
00:02:02.690 net/atlantic: not in enabled drivers build config
00:02:02.690 net/avp: not in enabled drivers build config
00:02:02.690 net/axgbe: not in enabled drivers build config
00:02:02.690 net/bnx2x: not in enabled drivers build config
00:02:02.690 net/bnxt: not in enabled drivers build config
00:02:02.690 net/bonding: not in enabled drivers build config
00:02:02.690 net/cnxk: not in enabled drivers build config
00:02:02.690 net/cpfl: not in enabled drivers build config
00:02:02.690 net/cxgbe: not in enabled drivers build config
00:02:02.690 net/dpaa: not in enabled drivers build config
00:02:02.690 net/dpaa2: not in enabled drivers build config
00:02:02.690 net/e1000: not in enabled drivers build config
00:02:02.690 net/ena: not in enabled drivers build config
00:02:02.690 net/enetc: not in enabled drivers build config
00:02:02.690 net/enetfec: not in enabled drivers build config
00:02:02.690 net/enic: not in enabled drivers build config
00:02:02.690 net/failsafe: not in enabled drivers build config
00:02:02.690 net/fm10k: not in enabled drivers build config
00:02:02.690 net/gve: not in enabled drivers build config
00:02:02.690 net/hinic: not in enabled drivers build config
00:02:02.690 net/hns3: not in enabled drivers build config
00:02:02.690 net/i40e: not in enabled drivers build config
00:02:02.690 net/iavf: not in enabled drivers build config
00:02:02.690 net/ice: not in enabled drivers build config
00:02:02.690 net/idpf: not in enabled drivers build config
00:02:02.690 net/igc: not in enabled drivers build config
00:02:02.690 net/ionic: not in enabled drivers build config
00:02:02.690 net/ipn3ke: not in enabled drivers build config
00:02:02.690 net/ixgbe: not in enabled drivers build config
00:02:02.690 net/mana: not in enabled drivers build config
00:02:02.690 net/memif: not in enabled drivers build config
00:02:02.690 net/mlx4: not in enabled drivers build config
00:02:02.690 net/mlx5: not in enabled drivers build config
00:02:02.690 net/mvneta: not in enabled drivers build config
00:02:02.690 net/mvpp2: not in enabled drivers build config
00:02:02.690 net/netvsc: not in enabled drivers build config
00:02:02.690 net/nfb: not in enabled drivers build config
00:02:02.690 net/nfp: not in enabled drivers build config
00:02:02.690 net/ngbe: not in enabled drivers build config
00:02:02.690 net/null: not in enabled drivers build config
00:02:02.690 net/octeontx: not in enabled drivers build config
00:02:02.690 net/octeon_ep: not in enabled drivers build config
00:02:02.690 net/pcap: not in enabled drivers build config
00:02:02.690 net/pfe: not in enabled drivers build config
00:02:02.690 net/qede: not in enabled drivers build config
00:02:02.691 net/ring: not in enabled drivers build config
00:02:02.691 net/sfc: not in enabled drivers build config
00:02:02.691 net/softnic: not in enabled drivers build config
00:02:02.691 net/tap: not in enabled drivers build config
00:02:02.691 net/thunderx: not in enabled drivers build config
00:02:02.691 net/txgbe: not in enabled drivers build config
00:02:02.691 net/vdev_netvsc: not in enabled drivers build config
00:02:02.691 net/vhost: not in enabled drivers build config
00:02:02.691 net/virtio: not in enabled drivers build config
00:02:02.691 net/vmxnet3: not in enabled drivers build config
00:02:02.691 raw/*: missing internal dependency, "rawdev"
00:02:02.691 crypto/armv8: not in enabled drivers build config
00:02:02.691 crypto/bcmfs: not in enabled drivers build config
00:02:02.691 crypto/caam_jr: not in enabled drivers build config
00:02:02.691 crypto/ccp: not in enabled drivers build config
00:02:02.691 crypto/cnxk: not in enabled drivers build config
00:02:02.691 crypto/dpaa_sec: not in enabled drivers build config
00:02:02.691 crypto/dpaa2_sec: not in enabled drivers build config
00:02:02.691 crypto/ipsec_mb: not in enabled drivers build config
00:02:02.691 crypto/mlx5: not in enabled drivers build config
00:02:02.691 crypto/mvsam: not in enabled drivers build config
00:02:02.691 crypto/nitrox: not in enabled drivers build config
00:02:02.691 crypto/null: not in enabled drivers build config
00:02:02.691 crypto/octeontx: not in enabled drivers build config
00:02:02.691 crypto/openssl: not in enabled drivers build config
00:02:02.691 crypto/scheduler: not in enabled drivers build config
00:02:02.691 crypto/uadk: not in enabled drivers build config
00:02:02.691 crypto/virtio: not in enabled drivers build config
00:02:02.691 compress/isal: not in enabled drivers build config
00:02:02.691 compress/mlx5: not in enabled drivers build config
00:02:02.691 compress/nitrox: not in enabled drivers build config
00:02:02.691 compress/octeontx: not in enabled drivers build config
00:02:02.691 compress/zlib: not in enabled drivers build config
00:02:02.691 regex/*: missing internal dependency, "regexdev"
00:02:02.691 ml/*: missing internal dependency, "mldev"
00:02:02.691 vdpa/ifc: not in enabled drivers build config
00:02:02.691 vdpa/mlx5: not in enabled drivers build config
00:02:02.691 vdpa/nfp: not in enabled drivers build config
00:02:02.691 vdpa/sfc: not in enabled drivers build config
00:02:02.691 event/*: missing internal dependency, "eventdev"
00:02:02.691 baseband/*: missing internal dependency, "bbdev"
00:02:02.691 gpu/*: missing internal dependency, "gpudev"
00:02:02.691
00:02:02.691
00:02:02.691 Build targets in project: 85
00:02:02.691
00:02:02.691 DPDK 24.03.0
00:02:02.691
00:02:02.691 User defined options
00:02:02.691 buildtype : debug
00:02:02.691 default_library : shared
00:02:02.691 libdir : lib
00:02:02.691 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:02.691 b_sanitize : address
00:02:02.691 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:02.691 c_link_args :
00:02:02.691 cpu_instruction_set: native
00:02:02.691 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:02.691 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:02.691 enable_docs : false
00:02:02.691 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:02.691 enable_kmods : false
00:02:02.691 max_lcores : 128
00:02:02.691 tests : false
00:02:02.691
00:02:02.691 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:02.957 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:03.218 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:03.218 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:03.218 [3/268] Linking static target lib/librte_log.a
00:02:03.218 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:03.218 [5/268] Linking static target lib/librte_kvargs.a
00:02:03.218 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:03.476 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:03.476 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:03.735 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:03.735 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:03.735 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.735 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:03.735 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:03.735 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:03.735 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:03.735 [16/268] Linking static target lib/librte_telemetry.a
00:02:03.735 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:03.993 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:03.994 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.252 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:04.252 [21/268] Linking target lib/librte_log.so.24.1
00:02:04.252 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:04.252 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:04.252 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:04.252 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:04.252 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:04.252 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:04.252 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:04.510 [29/268] Linking target lib/librte_kvargs.so.24.1
00:02:04.510 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:04.510 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:04.510 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.510 [33/268] Linking target lib/librte_telemetry.so.24.1
00:02:04.510 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:04.768 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:04.768 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:04.768 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:04.768 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:04.768 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:04.768 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:05.026 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:05.026 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:05.026 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:05.026 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:05.026 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:05.026 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:05.284 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:05.284 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:05.284 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:05.543 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:05.543 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:05.543 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:05.543 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:05.543 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:05.543 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:05.801 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:05.801 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:05.801 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:05.801 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:05.801 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:06.059 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:06.059 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:06.059 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:06.059 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:06.059 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:06.059 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:06.317 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:06.317 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:06.576 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:06.576 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:06.576 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:06.576 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:06.576 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:06.576 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:06.576 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:06.576 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:06.839 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:06.839 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:06.839 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:06.839 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:06.839 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:07.098 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:07.098 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:07.098 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:07.356 [85/268] Linking static target lib/librte_eal.a
00:02:07.356 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:07.356 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:07.356 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:07.356 [89/268] Linking static target lib/librte_ring.a
00:02:07.356 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:07.614 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:07.614 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:07.614 [93/268] Linking static target lib/librte_mempool.a
00:02:07.614 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:07.872 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:07.872 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:07.872 [97/268] Linking static target lib/librte_rcu.a
00:02:07.872 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:07.872 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:07.873 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.131 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:08.131 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:08.131 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:08.131 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:08.131 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:08.389 [106/268] Linking static target lib/librte_net.a
00:02:08.389 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.389 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:08.389 [109/268] Linking static target lib/librte_mbuf.a
00:02:08.389 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:08.389 [111/268] Linking static target lib/librte_meter.a
00:02:08.647 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:08.647 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:08.647 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.647 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:08.647 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:08.647 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.906 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.164 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:09.164 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:09.164 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:09.422 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.422 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:09.422 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:09.680 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:09.680 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:09.680 [127/268] Linking static target lib/librte_pci.a
00:02:09.680 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:09.680 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:09.680 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:09.680 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:09.680 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:09.680 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:09.938 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:09.938 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:09.938 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:09.938 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:09.938 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:09.938 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.938 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.938 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.938 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.938 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.938 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:10.196 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.196 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:10.196 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.454 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.454 [149/268] Linking static target lib/librte_cmdline.a 00:02:10.712 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.712 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.712 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.712 [153/268] Linking static target lib/librte_timer.a 00:02:10.712 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.712 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.970 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.970 [157/268] Linking static target lib/librte_compressdev.a 00:02:11.228 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.228 [159/268] Linking static target lib/librte_ethdev.a 00:02:11.228 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.228 [161/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.228 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.228 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.487 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.487 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.487 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.487 [167/268] Linking static target lib/librte_hash.a 00:02:11.745 [168/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.745 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.745 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.745 [171/268] Linking static target lib/librte_dmadev.a 00:02:11.745 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.745 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:11.745 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.003 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.003 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.003 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.261 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.261 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.261 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.519 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.519 [182/268] Linking static target lib/librte_power.a 00:02:12.519 [183/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.519 [184/268] Linking static target lib/librte_cryptodev.a 00:02:12.519 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.519 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.777 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.777 [188/268] Linking static target lib/librte_reorder.a 00:02:12.777 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.777 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.777 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.777 [192/268] Linking static target lib/librte_security.a 00:02:13.035 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.294 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.556 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.556 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.556 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.556 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.556 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:13.556 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.123 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.123 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.123 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.123 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.123 [205/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.384 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.384 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:14.384 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:14.384 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:14.384 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:14.648 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:14.648 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:14.648 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.648 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.648 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.648 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:14.648 [217/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.648 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.648 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:14.648 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:14.648 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:14.906 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.906 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:14.906 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.906 [225/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.906 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.164 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.098 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.999 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.999 [230/268] Linking target lib/librte_eal.so.24.1 00:02:17.999 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.999 [232/268] Linking target lib/librte_meter.so.24.1 00:02:17.999 [233/268] Linking target lib/librte_ring.so.24.1 00:02:18.000 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:18.000 [235/268] Linking target lib/librte_pci.so.24.1 00:02:18.000 [236/268] Linking target lib/librte_timer.so.24.1 00:02:18.000 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:18.258 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:18.258 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:18.258 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:18.258 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:18.258 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:18.258 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:18.258 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:18.258 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:18.258 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.258 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:18.516 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:18.516 
[249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.516 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.516 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:18.516 [252/268] Linking target lib/librte_net.so.24.1 00:02:18.516 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:18.516 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:18.774 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:18.774 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:18.774 [257/268] Linking target lib/librte_hash.so.24.1 00:02:18.774 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:18.774 [259/268] Linking target lib/librte_security.so.24.1 00:02:19.033 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:19.968 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.968 [262/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.968 [263/268] Linking static target lib/librte_vhost.a 00:02:20.226 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:20.226 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:20.484 [266/268] Linking target lib/librte_power.so.24.1 00:02:22.389 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.648 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:22.648 INFO: autodetecting backend as ninja 00:02:22.648 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:40.750 CC lib/log/log.o 00:02:40.750 CC lib/log/log_flags.o 00:02:40.750 CC lib/ut/ut.o 00:02:40.750 CC lib/log/log_deprecated.o 00:02:40.750 CC lib/ut_mock/mock.o 00:02:40.750 LIB libspdk_ut.a 00:02:40.750 LIB 
libspdk_ut_mock.a 00:02:40.750 LIB libspdk_log.a 00:02:40.750 SO libspdk_ut.so.2.0 00:02:40.750 SO libspdk_ut_mock.so.6.0 00:02:40.750 SO libspdk_log.so.7.1 00:02:40.750 SYMLINK libspdk_ut.so 00:02:40.750 SYMLINK libspdk_ut_mock.so 00:02:40.750 SYMLINK libspdk_log.so 00:02:40.750 CC lib/util/base64.o 00:02:40.750 CC lib/util/cpuset.o 00:02:40.750 CC lib/util/bit_array.o 00:02:40.750 CC lib/util/crc32.o 00:02:40.750 CC lib/util/crc32c.o 00:02:40.750 CC lib/util/crc16.o 00:02:40.750 CXX lib/trace_parser/trace.o 00:02:40.750 CC lib/ioat/ioat.o 00:02:40.750 CC lib/dma/dma.o 00:02:40.750 CC lib/vfio_user/host/vfio_user_pci.o 00:02:40.750 CC lib/util/crc32_ieee.o 00:02:40.750 CC lib/util/crc64.o 00:02:40.750 CC lib/util/dif.o 00:02:40.750 CC lib/util/fd.o 00:02:40.750 CC lib/vfio_user/host/vfio_user.o 00:02:40.750 LIB libspdk_dma.a 00:02:40.750 CC lib/util/fd_group.o 00:02:40.750 CC lib/util/file.o 00:02:40.750 SO libspdk_dma.so.5.0 00:02:40.750 CC lib/util/hexlify.o 00:02:40.750 CC lib/util/iov.o 00:02:40.750 SYMLINK libspdk_dma.so 00:02:40.750 CC lib/util/math.o 00:02:40.750 LIB libspdk_ioat.a 00:02:40.750 SO libspdk_ioat.so.7.0 00:02:40.750 CC lib/util/net.o 00:02:40.750 CC lib/util/pipe.o 00:02:40.750 SYMLINK libspdk_ioat.so 00:02:40.750 CC lib/util/strerror_tls.o 00:02:40.750 CC lib/util/string.o 00:02:40.750 LIB libspdk_vfio_user.a 00:02:40.750 SO libspdk_vfio_user.so.5.0 00:02:40.750 CC lib/util/uuid.o 00:02:40.750 CC lib/util/xor.o 00:02:40.750 SYMLINK libspdk_vfio_user.so 00:02:40.750 CC lib/util/zipf.o 00:02:40.750 CC lib/util/md5.o 00:02:40.750 LIB libspdk_util.a 00:02:40.750 SO libspdk_util.so.10.1 00:02:40.750 LIB libspdk_trace_parser.a 00:02:40.750 SYMLINK libspdk_util.so 00:02:40.750 SO libspdk_trace_parser.so.6.0 00:02:40.750 SYMLINK libspdk_trace_parser.so 00:02:40.750 CC lib/idxd/idxd_user.o 00:02:40.750 CC lib/idxd/idxd.o 00:02:40.750 CC lib/idxd/idxd_kernel.o 00:02:40.750 CC lib/json/json_parse.o 00:02:40.750 CC lib/json/json_util.o 00:02:40.750 CC 
lib/json/json_write.o 00:02:40.750 CC lib/conf/conf.o 00:02:40.750 CC lib/env_dpdk/env.o 00:02:40.750 CC lib/rdma_utils/rdma_utils.o 00:02:40.750 CC lib/vmd/vmd.o 00:02:40.750 CC lib/vmd/led.o 00:02:40.750 LIB libspdk_conf.a 00:02:40.750 CC lib/env_dpdk/memory.o 00:02:40.750 SO libspdk_conf.so.6.0 00:02:40.750 CC lib/env_dpdk/pci.o 00:02:40.750 CC lib/env_dpdk/init.o 00:02:40.750 LIB libspdk_rdma_utils.a 00:02:40.750 SYMLINK libspdk_conf.so 00:02:40.750 CC lib/env_dpdk/threads.o 00:02:40.750 CC lib/env_dpdk/pci_ioat.o 00:02:40.750 LIB libspdk_json.a 00:02:40.750 SO libspdk_rdma_utils.so.1.0 00:02:40.750 SO libspdk_json.so.6.0 00:02:40.750 SYMLINK libspdk_rdma_utils.so 00:02:40.750 CC lib/env_dpdk/pci_virtio.o 00:02:40.750 SYMLINK libspdk_json.so 00:02:40.750 CC lib/env_dpdk/pci_vmd.o 00:02:40.750 CC lib/env_dpdk/pci_idxd.o 00:02:40.750 CC lib/env_dpdk/pci_event.o 00:02:40.750 CC lib/env_dpdk/sigbus_handler.o 00:02:40.750 CC lib/env_dpdk/pci_dpdk.o 00:02:40.750 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:40.750 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:40.750 CC lib/rdma_provider/common.o 00:02:40.750 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:40.750 LIB libspdk_idxd.a 00:02:40.750 SO libspdk_idxd.so.12.1 00:02:40.750 CC lib/jsonrpc/jsonrpc_server.o 00:02:40.750 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:40.750 LIB libspdk_vmd.a 00:02:40.750 SYMLINK libspdk_idxd.so 00:02:40.751 CC lib/jsonrpc/jsonrpc_client.o 00:02:40.751 SO libspdk_vmd.so.6.0 00:02:40.751 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:40.751 SYMLINK libspdk_vmd.so 00:02:40.751 LIB libspdk_rdma_provider.a 00:02:40.751 SO libspdk_rdma_provider.so.7.0 00:02:40.751 SYMLINK libspdk_rdma_provider.so 00:02:40.751 LIB libspdk_jsonrpc.a 00:02:40.751 SO libspdk_jsonrpc.so.6.0 00:02:40.751 SYMLINK libspdk_jsonrpc.so 00:02:41.009 LIB libspdk_env_dpdk.a 00:02:41.009 CC lib/rpc/rpc.o 00:02:41.009 SO libspdk_env_dpdk.so.15.1 00:02:41.268 LIB libspdk_rpc.a 00:02:41.268 SYMLINK libspdk_env_dpdk.so 00:02:41.268 SO 
libspdk_rpc.so.6.0 00:02:41.269 SYMLINK libspdk_rpc.so 00:02:41.836 CC lib/trace/trace_flags.o 00:02:41.836 CC lib/trace/trace.o 00:02:41.836 CC lib/notify/notify.o 00:02:41.836 CC lib/notify/notify_rpc.o 00:02:41.836 CC lib/trace/trace_rpc.o 00:02:41.836 CC lib/keyring/keyring.o 00:02:41.836 CC lib/keyring/keyring_rpc.o 00:02:41.836 LIB libspdk_notify.a 00:02:41.836 SO libspdk_notify.so.6.0 00:02:41.836 LIB libspdk_keyring.a 00:02:41.836 SYMLINK libspdk_notify.so 00:02:41.836 LIB libspdk_trace.a 00:02:41.836 SO libspdk_keyring.so.2.0 00:02:42.096 SO libspdk_trace.so.11.0 00:02:42.096 SYMLINK libspdk_keyring.so 00:02:42.096 SYMLINK libspdk_trace.so 00:02:42.355 CC lib/thread/thread.o 00:02:42.355 CC lib/thread/iobuf.o 00:02:42.355 CC lib/sock/sock.o 00:02:42.355 CC lib/sock/sock_rpc.o 00:02:42.944 LIB libspdk_sock.a 00:02:42.944 SO libspdk_sock.so.10.0 00:02:42.944 SYMLINK libspdk_sock.so 00:02:43.523 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:43.523 CC lib/nvme/nvme_ctrlr.o 00:02:43.523 CC lib/nvme/nvme_fabric.o 00:02:43.523 CC lib/nvme/nvme_ns_cmd.o 00:02:43.523 CC lib/nvme/nvme_ns.o 00:02:43.523 CC lib/nvme/nvme_pcie.o 00:02:43.523 CC lib/nvme/nvme_pcie_common.o 00:02:43.523 CC lib/nvme/nvme_qpair.o 00:02:43.523 CC lib/nvme/nvme.o 00:02:44.090 CC lib/nvme/nvme_quirks.o 00:02:44.090 CC lib/nvme/nvme_transport.o 00:02:44.090 LIB libspdk_thread.a 00:02:44.090 CC lib/nvme/nvme_discovery.o 00:02:44.090 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:44.090 SO libspdk_thread.so.11.0 00:02:44.090 SYMLINK libspdk_thread.so 00:02:44.090 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:44.090 CC lib/nvme/nvme_tcp.o 00:02:44.090 CC lib/nvme/nvme_opal.o 00:02:44.349 CC lib/accel/accel.o 00:02:44.608 CC lib/blob/blobstore.o 00:02:44.608 CC lib/blob/request.o 00:02:44.608 CC lib/init/json_config.o 00:02:44.608 CC lib/blob/zeroes.o 00:02:44.867 CC lib/virtio/virtio.o 00:02:44.867 CC lib/nvme/nvme_io_msg.o 00:02:44.867 CC lib/init/subsystem.o 00:02:44.867 CC lib/fsdev/fsdev.o 00:02:44.867 CC 
lib/blob/blob_bs_dev.o 00:02:44.867 CC lib/accel/accel_rpc.o 00:02:44.867 CC lib/init/subsystem_rpc.o 00:02:45.125 CC lib/virtio/virtio_vhost_user.o 00:02:45.125 CC lib/nvme/nvme_poll_group.o 00:02:45.125 CC lib/nvme/nvme_zns.o 00:02:45.125 CC lib/init/rpc.o 00:02:45.125 LIB libspdk_init.a 00:02:45.386 SO libspdk_init.so.6.0 00:02:45.386 CC lib/accel/accel_sw.o 00:02:45.386 CC lib/virtio/virtio_vfio_user.o 00:02:45.386 SYMLINK libspdk_init.so 00:02:45.386 CC lib/nvme/nvme_stubs.o 00:02:45.386 CC lib/fsdev/fsdev_io.o 00:02:45.386 CC lib/fsdev/fsdev_rpc.o 00:02:45.646 CC lib/virtio/virtio_pci.o 00:02:45.646 CC lib/nvme/nvme_auth.o 00:02:45.646 CC lib/nvme/nvme_cuse.o 00:02:45.646 CC lib/nvme/nvme_rdma.o 00:02:45.646 LIB libspdk_accel.a 00:02:45.646 SO libspdk_accel.so.16.0 00:02:45.646 CC lib/event/app.o 00:02:45.646 CC lib/event/reactor.o 00:02:45.646 SYMLINK libspdk_accel.so 00:02:45.646 CC lib/event/log_rpc.o 00:02:45.906 LIB libspdk_fsdev.a 00:02:45.906 LIB libspdk_virtio.a 00:02:45.906 CC lib/event/app_rpc.o 00:02:45.906 SO libspdk_virtio.so.7.0 00:02:45.906 SO libspdk_fsdev.so.2.0 00:02:45.906 SYMLINK libspdk_virtio.so 00:02:45.906 CC lib/event/scheduler_static.o 00:02:45.906 SYMLINK libspdk_fsdev.so 00:02:46.166 CC lib/bdev/bdev.o 00:02:46.166 CC lib/bdev/bdev_rpc.o 00:02:46.166 CC lib/bdev/bdev_zone.o 00:02:46.166 CC lib/bdev/part.o 00:02:46.166 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:46.166 LIB libspdk_event.a 00:02:46.166 SO libspdk_event.so.14.0 00:02:46.166 CC lib/bdev/scsi_nvme.o 00:02:46.426 SYMLINK libspdk_event.so 00:02:46.686 LIB libspdk_fuse_dispatcher.a 00:02:46.946 SO libspdk_fuse_dispatcher.so.1.0 00:02:46.946 LIB libspdk_nvme.a 00:02:46.946 SYMLINK libspdk_fuse_dispatcher.so 00:02:46.946 SO libspdk_nvme.so.15.0 00:02:47.206 SYMLINK libspdk_nvme.so 00:02:48.145 LIB libspdk_blob.a 00:02:48.145 SO libspdk_blob.so.11.0 00:02:48.145 SYMLINK libspdk_blob.so 00:02:48.405 CC lib/blobfs/blobfs.o 00:02:48.405 CC lib/blobfs/tree.o 00:02:48.405 CC 
lib/lvol/lvol.o 00:02:48.665 LIB libspdk_bdev.a 00:02:48.925 SO libspdk_bdev.so.17.0 00:02:48.925 SYMLINK libspdk_bdev.so 00:02:49.184 CC lib/nvmf/ctrlr_bdev.o 00:02:49.184 CC lib/nvmf/ctrlr_discovery.o 00:02:49.184 CC lib/nvmf/subsystem.o 00:02:49.184 CC lib/scsi/dev.o 00:02:49.184 CC lib/nvmf/ctrlr.o 00:02:49.184 CC lib/nbd/nbd.o 00:02:49.184 CC lib/ftl/ftl_core.o 00:02:49.184 CC lib/ublk/ublk.o 00:02:49.184 LIB libspdk_blobfs.a 00:02:49.443 SO libspdk_blobfs.so.10.0 00:02:49.443 SYMLINK libspdk_blobfs.so 00:02:49.443 CC lib/ublk/ublk_rpc.o 00:02:49.443 LIB libspdk_lvol.a 00:02:49.443 SO libspdk_lvol.so.10.0 00:02:49.443 CC lib/scsi/lun.o 00:02:49.443 SYMLINK libspdk_lvol.so 00:02:49.444 CC lib/scsi/port.o 00:02:49.444 CC lib/scsi/scsi.o 00:02:49.704 CC lib/ftl/ftl_init.o 00:02:49.704 CC lib/scsi/scsi_bdev.o 00:02:49.704 CC lib/scsi/scsi_pr.o 00:02:49.704 CC lib/nbd/nbd_rpc.o 00:02:49.704 CC lib/scsi/scsi_rpc.o 00:02:49.704 CC lib/nvmf/nvmf.o 00:02:49.963 CC lib/ftl/ftl_layout.o 00:02:49.964 CC lib/scsi/task.o 00:02:49.964 LIB libspdk_nbd.a 00:02:49.964 SO libspdk_nbd.so.7.0 00:02:49.964 LIB libspdk_ublk.a 00:02:49.964 SO libspdk_ublk.so.3.0 00:02:49.964 SYMLINK libspdk_nbd.so 00:02:49.964 CC lib/ftl/ftl_debug.o 00:02:49.964 CC lib/nvmf/nvmf_rpc.o 00:02:49.964 CC lib/ftl/ftl_io.o 00:02:49.964 SYMLINK libspdk_ublk.so 00:02:49.964 CC lib/nvmf/transport.o 00:02:49.964 CC lib/ftl/ftl_sb.o 00:02:50.223 LIB libspdk_scsi.a 00:02:50.223 CC lib/ftl/ftl_l2p.o 00:02:50.223 CC lib/ftl/ftl_l2p_flat.o 00:02:50.223 SO libspdk_scsi.so.9.0 00:02:50.223 CC lib/ftl/ftl_nv_cache.o 00:02:50.223 CC lib/nvmf/tcp.o 00:02:50.223 SYMLINK libspdk_scsi.so 00:02:50.223 CC lib/nvmf/stubs.o 00:02:50.223 CC lib/nvmf/mdns_server.o 00:02:50.482 CC lib/nvmf/rdma.o 00:02:50.482 CC lib/nvmf/auth.o 00:02:50.742 CC lib/ftl/ftl_band.o 00:02:50.742 CC lib/ftl/ftl_band_ops.o 00:02:50.742 CC lib/iscsi/conn.o 00:02:51.001 CC lib/vhost/vhost.o 00:02:51.001 CC lib/vhost/vhost_rpc.o 00:02:51.001 CC 
lib/ftl/ftl_writer.o 00:02:51.001 CC lib/vhost/vhost_scsi.o 00:02:51.001 CC lib/iscsi/init_grp.o 00:02:51.261 CC lib/iscsi/iscsi.o 00:02:51.261 CC lib/ftl/ftl_rq.o 00:02:51.261 CC lib/ftl/ftl_reloc.o 00:02:51.261 CC lib/ftl/ftl_l2p_cache.o 00:02:51.261 CC lib/iscsi/param.o 00:02:51.520 CC lib/iscsi/portal_grp.o 00:02:51.520 CC lib/iscsi/tgt_node.o 00:02:51.779 CC lib/iscsi/iscsi_subsystem.o 00:02:51.779 CC lib/vhost/vhost_blk.o 00:02:51.779 CC lib/vhost/rte_vhost_user.o 00:02:51.779 CC lib/ftl/ftl_p2l.o 00:02:51.779 CC lib/ftl/ftl_p2l_log.o 00:02:52.037 CC lib/iscsi/iscsi_rpc.o 00:02:52.037 CC lib/iscsi/task.o 00:02:52.037 CC lib/ftl/mngt/ftl_mngt.o 00:02:52.037 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:52.037 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:52.037 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:52.296 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:52.296 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:52.296 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:52.296 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:52.296 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:52.296 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:52.555 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:52.555 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:52.555 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:52.555 CC lib/ftl/utils/ftl_conf.o 00:02:52.555 CC lib/ftl/utils/ftl_md.o 00:02:52.555 CC lib/ftl/utils/ftl_mempool.o 00:02:52.555 CC lib/ftl/utils/ftl_bitmap.o 00:02:52.555 LIB libspdk_iscsi.a 00:02:52.555 CC lib/ftl/utils/ftl_property.o 00:02:52.813 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:52.813 SO libspdk_iscsi.so.8.0 00:02:52.813 LIB libspdk_vhost.a 00:02:52.813 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:52.813 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:52.813 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:52.813 SO libspdk_vhost.so.8.0 00:02:52.813 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:52.813 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:52.813 SYMLINK libspdk_iscsi.so 00:02:52.813 LIB libspdk_nvmf.a 00:02:52.813 CC lib/ftl/upgrade/ftl_trim_upgrade.o 
00:02:53.072 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.072 SYMLINK libspdk_vhost.so 00:02:53.072 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.072 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.072 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.072 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:53.072 SO libspdk_nvmf.so.20.0 00:02:53.072 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:53.072 CC lib/ftl/base/ftl_base_dev.o 00:02:53.072 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.072 CC lib/ftl/ftl_trace.o 00:02:53.331 SYMLINK libspdk_nvmf.so 00:02:53.331 LIB libspdk_ftl.a 00:02:53.589 SO libspdk_ftl.so.9.0 00:02:53.848 SYMLINK libspdk_ftl.so 00:02:54.108 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.367 CC module/accel/ioat/accel_ioat.o 00:02:54.367 CC module/accel/dsa/accel_dsa.o 00:02:54.367 CC module/sock/posix/posix.o 00:02:54.367 CC module/fsdev/aio/fsdev_aio.o 00:02:54.367 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.367 CC module/accel/error/accel_error.o 00:02:54.367 CC module/blob/bdev/blob_bdev.o 00:02:54.367 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:54.367 CC module/keyring/file/keyring.o 00:02:54.367 LIB libspdk_env_dpdk_rpc.a 00:02:54.367 SO libspdk_env_dpdk_rpc.so.6.0 00:02:54.367 SYMLINK libspdk_env_dpdk_rpc.so 00:02:54.367 CC module/keyring/file/keyring_rpc.o 00:02:54.367 CC module/accel/error/accel_error_rpc.o 00:02:54.367 LIB libspdk_scheduler_dpdk_governor.a 00:02:54.367 CC module/accel/ioat/accel_ioat_rpc.o 00:02:54.367 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:54.367 LIB libspdk_scheduler_dynamic.a 00:02:54.686 SO libspdk_scheduler_dynamic.so.4.0 00:02:54.686 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:54.686 SYMLINK libspdk_scheduler_dynamic.so 00:02:54.686 LIB libspdk_blob_bdev.a 00:02:54.686 LIB libspdk_keyring_file.a 00:02:54.686 LIB libspdk_accel_error.a 00:02:54.686 CC module/accel/dsa/accel_dsa_rpc.o 00:02:54.686 SO libspdk_blob_bdev.so.11.0 00:02:54.686 SO libspdk_keyring_file.so.2.0 00:02:54.686 SO libspdk_accel_error.so.2.0 
00:02:54.686 LIB libspdk_accel_ioat.a 00:02:54.686 SO libspdk_accel_ioat.so.6.0 00:02:54.686 CC module/accel/iaa/accel_iaa.o 00:02:54.686 SYMLINK libspdk_keyring_file.so 00:02:54.686 SYMLINK libspdk_blob_bdev.so 00:02:54.686 CC module/accel/iaa/accel_iaa_rpc.o 00:02:54.686 SYMLINK libspdk_accel_error.so 00:02:54.686 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:54.686 CC module/fsdev/aio/linux_aio_mgr.o 00:02:54.686 CC module/keyring/linux/keyring.o 00:02:54.686 SYMLINK libspdk_accel_ioat.so 00:02:54.686 CC module/scheduler/gscheduler/gscheduler.o 00:02:54.686 LIB libspdk_accel_dsa.a 00:02:54.686 SO libspdk_accel_dsa.so.5.0 00:02:54.946 CC module/keyring/linux/keyring_rpc.o 00:02:54.946 SYMLINK libspdk_accel_dsa.so 00:02:54.946 LIB libspdk_accel_iaa.a 00:02:54.946 LIB libspdk_scheduler_gscheduler.a 00:02:54.946 SO libspdk_scheduler_gscheduler.so.4.0 00:02:54.946 SO libspdk_accel_iaa.so.3.0 00:02:54.946 SYMLINK libspdk_accel_iaa.so 00:02:54.946 LIB libspdk_keyring_linux.a 00:02:54.946 SYMLINK libspdk_scheduler_gscheduler.so 00:02:54.946 CC module/bdev/delay/vbdev_delay.o 00:02:54.946 SO libspdk_keyring_linux.so.1.0 00:02:54.946 CC module/bdev/gpt/gpt.o 00:02:54.946 CC module/bdev/error/vbdev_error.o 00:02:54.946 LIB libspdk_fsdev_aio.a 00:02:54.946 SO libspdk_fsdev_aio.so.1.0 00:02:54.946 SYMLINK libspdk_keyring_linux.so 00:02:54.946 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:54.946 CC module/bdev/lvol/vbdev_lvol.o 00:02:54.946 LIB libspdk_sock_posix.a 00:02:54.946 CC module/blobfs/bdev/blobfs_bdev.o 00:02:54.946 CC module/bdev/malloc/bdev_malloc.o 00:02:54.946 SYMLINK libspdk_fsdev_aio.so 00:02:54.946 SO libspdk_sock_posix.so.6.0 00:02:54.946 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:54.946 CC module/bdev/null/bdev_null.o 00:02:55.206 CC module/bdev/gpt/vbdev_gpt.o 00:02:55.206 SYMLINK libspdk_sock_posix.so 00:02:55.206 CC module/bdev/error/vbdev_error_rpc.o 00:02:55.206 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:55.206 CC module/bdev/malloc/bdev_malloc_rpc.o 
00:02:55.206 CC module/bdev/null/bdev_null_rpc.o 00:02:55.206 LIB libspdk_blobfs_bdev.a 00:02:55.206 SO libspdk_blobfs_bdev.so.6.0 00:02:55.206 LIB libspdk_bdev_delay.a 00:02:55.206 LIB libspdk_bdev_error.a 00:02:55.206 SYMLINK libspdk_blobfs_bdev.so 00:02:55.206 SO libspdk_bdev_delay.so.6.0 00:02:55.206 SO libspdk_bdev_error.so.6.0 00:02:55.466 LIB libspdk_bdev_null.a 00:02:55.466 SYMLINK libspdk_bdev_error.so 00:02:55.466 SYMLINK libspdk_bdev_delay.so 00:02:55.466 LIB libspdk_bdev_gpt.a 00:02:55.466 SO libspdk_bdev_null.so.6.0 00:02:55.466 SO libspdk_bdev_gpt.so.6.0 00:02:55.466 LIB libspdk_bdev_malloc.a 00:02:55.466 CC module/bdev/nvme/bdev_nvme.o 00:02:55.466 SO libspdk_bdev_malloc.so.6.0 00:02:55.466 SYMLINK libspdk_bdev_gpt.so 00:02:55.466 SYMLINK libspdk_bdev_null.so 00:02:55.466 CC module/bdev/passthru/vbdev_passthru.o 00:02:55.466 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:55.466 CC module/bdev/raid/bdev_raid.o 00:02:55.466 CC module/bdev/split/vbdev_split.o 00:02:55.466 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:55.466 CC module/bdev/split/vbdev_split_rpc.o 00:02:55.466 SYMLINK libspdk_bdev_malloc.so 00:02:55.466 CC module/bdev/raid/bdev_raid_rpc.o 00:02:55.466 LIB libspdk_bdev_lvol.a 00:02:55.726 SO libspdk_bdev_lvol.so.6.0 00:02:55.726 CC module/bdev/aio/bdev_aio.o 00:02:55.726 SYMLINK libspdk_bdev_lvol.so 00:02:55.726 CC module/bdev/aio/bdev_aio_rpc.o 00:02:55.726 CC module/bdev/raid/bdev_raid_sb.o 00:02:55.726 LIB libspdk_bdev_split.a 00:02:55.726 SO libspdk_bdev_split.so.6.0 00:02:55.726 CC module/bdev/ftl/bdev_ftl.o 00:02:55.726 LIB libspdk_bdev_passthru.a 00:02:55.726 SYMLINK libspdk_bdev_split.so 00:02:55.726 SO libspdk_bdev_passthru.so.6.0 00:02:55.726 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:55.726 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:55.985 SYMLINK libspdk_bdev_passthru.so 00:02:55.985 CC module/bdev/nvme/nvme_rpc.o 00:02:55.985 CC module/bdev/iscsi/bdev_iscsi.o 00:02:55.985 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:55.985 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:55.985 LIB libspdk_bdev_aio.a 00:02:55.985 LIB libspdk_bdev_zone_block.a 00:02:55.985 SO libspdk_bdev_aio.so.6.0 00:02:55.985 SO libspdk_bdev_zone_block.so.6.0 00:02:55.985 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:55.985 SYMLINK libspdk_bdev_aio.so 00:02:55.985 SYMLINK libspdk_bdev_zone_block.so 00:02:55.985 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:55.985 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:55.985 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.245 CC module/bdev/raid/raid0.o 00:02:56.245 CC module/bdev/nvme/vbdev_opal.o 00:02:56.245 LIB libspdk_bdev_ftl.a 00:02:56.245 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:56.245 LIB libspdk_bdev_iscsi.a 00:02:56.245 SO libspdk_bdev_ftl.so.6.0 00:02:56.245 SO libspdk_bdev_iscsi.so.6.0 00:02:56.245 SYMLINK libspdk_bdev_ftl.so 00:02:56.245 CC module/bdev/raid/raid1.o 00:02:56.245 CC module/bdev/raid/concat.o 00:02:56.245 SYMLINK libspdk_bdev_iscsi.so 00:02:56.245 CC module/bdev/raid/raid5f.o 00:02:56.504 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:56.504 LIB libspdk_bdev_virtio.a 00:02:56.504 SO libspdk_bdev_virtio.so.6.0 00:02:56.504 SYMLINK libspdk_bdev_virtio.so 00:02:57.073 LIB libspdk_bdev_raid.a 00:02:57.073 SO libspdk_bdev_raid.so.6.0 00:02:57.073 SYMLINK libspdk_bdev_raid.so 00:02:58.009 LIB libspdk_bdev_nvme.a 00:02:58.268 SO libspdk_bdev_nvme.so.7.1 00:02:58.268 SYMLINK libspdk_bdev_nvme.so 00:02:58.838 CC module/event/subsystems/vmd/vmd.o 00:02:58.838 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.838 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:58.838 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.838 CC module/event/subsystems/sock/sock.o 00:02:58.838 CC module/event/subsystems/keyring/keyring.o 00:02:58.838 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.838 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.838 CC module/event/subsystems/fsdev/fsdev.o 00:02:59.098 LIB 
libspdk_event_scheduler.a 00:02:59.098 LIB libspdk_event_sock.a 00:02:59.098 LIB libspdk_event_vmd.a 00:02:59.098 LIB libspdk_event_keyring.a 00:02:59.098 LIB libspdk_event_fsdev.a 00:02:59.098 LIB libspdk_event_vhost_blk.a 00:02:59.098 SO libspdk_event_scheduler.so.4.0 00:02:59.098 SO libspdk_event_sock.so.5.0 00:02:59.098 SO libspdk_event_keyring.so.1.0 00:02:59.098 LIB libspdk_event_iobuf.a 00:02:59.098 SO libspdk_event_vmd.so.6.0 00:02:59.098 SO libspdk_event_fsdev.so.1.0 00:02:59.098 SO libspdk_event_vhost_blk.so.3.0 00:02:59.098 SO libspdk_event_iobuf.so.3.0 00:02:59.098 SYMLINK libspdk_event_sock.so 00:02:59.098 SYMLINK libspdk_event_scheduler.so 00:02:59.098 SYMLINK libspdk_event_keyring.so 00:02:59.098 SYMLINK libspdk_event_vmd.so 00:02:59.098 SYMLINK libspdk_event_fsdev.so 00:02:59.098 SYMLINK libspdk_event_vhost_blk.so 00:02:59.098 SYMLINK libspdk_event_iobuf.so 00:02:59.357 CC module/event/subsystems/accel/accel.o 00:02:59.616 LIB libspdk_event_accel.a 00:02:59.616 SO libspdk_event_accel.so.6.0 00:02:59.616 SYMLINK libspdk_event_accel.so 00:03:00.185 CC module/event/subsystems/bdev/bdev.o 00:03:00.185 LIB libspdk_event_bdev.a 00:03:00.185 SO libspdk_event_bdev.so.6.0 00:03:00.445 SYMLINK libspdk_event_bdev.so 00:03:00.703 CC module/event/subsystems/scsi/scsi.o 00:03:00.703 CC module/event/subsystems/ublk/ublk.o 00:03:00.703 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:00.703 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:00.703 CC module/event/subsystems/nbd/nbd.o 00:03:00.703 LIB libspdk_event_scsi.a 00:03:00.703 LIB libspdk_event_nbd.a 00:03:00.961 LIB libspdk_event_ublk.a 00:03:00.961 SO libspdk_event_scsi.so.6.0 00:03:00.961 SO libspdk_event_nbd.so.6.0 00:03:00.961 SO libspdk_event_ublk.so.3.0 00:03:00.961 SYMLINK libspdk_event_scsi.so 00:03:00.961 SYMLINK libspdk_event_ublk.so 00:03:00.961 LIB libspdk_event_nvmf.a 00:03:00.961 SYMLINK libspdk_event_nbd.so 00:03:00.961 SO libspdk_event_nvmf.so.6.0 00:03:00.961 SYMLINK libspdk_event_nvmf.so 
00:03:01.220 CC module/event/subsystems/iscsi/iscsi.o 00:03:01.220 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:01.478 LIB libspdk_event_vhost_scsi.a 00:03:01.478 LIB libspdk_event_iscsi.a 00:03:01.478 SO libspdk_event_vhost_scsi.so.3.0 00:03:01.478 SO libspdk_event_iscsi.so.6.0 00:03:01.478 SYMLINK libspdk_event_vhost_scsi.so 00:03:01.478 SYMLINK libspdk_event_iscsi.so 00:03:01.737 SO libspdk.so.6.0 00:03:01.737 SYMLINK libspdk.so 00:03:01.997 CC app/spdk_lspci/spdk_lspci.o 00:03:01.997 CXX app/trace/trace.o 00:03:01.997 CC app/spdk_nvme_identify/identify.o 00:03:01.997 CC app/spdk_nvme_perf/perf.o 00:03:01.997 CC app/trace_record/trace_record.o 00:03:01.997 CC app/iscsi_tgt/iscsi_tgt.o 00:03:01.997 CC app/nvmf_tgt/nvmf_main.o 00:03:01.997 CC app/spdk_tgt/spdk_tgt.o 00:03:02.256 CC examples/util/zipf/zipf.o 00:03:02.256 CC test/thread/poller_perf/poller_perf.o 00:03:02.256 LINK spdk_lspci 00:03:02.256 LINK nvmf_tgt 00:03:02.256 LINK iscsi_tgt 00:03:02.256 LINK spdk_trace_record 00:03:02.256 LINK zipf 00:03:02.256 LINK spdk_tgt 00:03:02.256 LINK poller_perf 00:03:02.515 LINK spdk_trace 00:03:02.515 CC app/spdk_nvme_discover/discovery_aer.o 00:03:02.515 CC app/spdk_top/spdk_top.o 00:03:02.515 CC examples/ioat/perf/perf.o 00:03:02.515 CC app/spdk_dd/spdk_dd.o 00:03:02.774 CC test/dma/test_dma/test_dma.o 00:03:02.774 LINK spdk_nvme_discover 00:03:02.774 CC app/fio/nvme/fio_plugin.o 00:03:02.774 CC test/app/bdev_svc/bdev_svc.o 00:03:02.774 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:02.774 LINK ioat_perf 00:03:03.088 LINK bdev_svc 00:03:03.088 LINK spdk_nvme_perf 00:03:03.088 CC app/vhost/vhost.o 00:03:03.088 LINK spdk_dd 00:03:03.088 LINK spdk_nvme_identify 00:03:03.088 CC examples/ioat/verify/verify.o 00:03:03.088 LINK vhost 00:03:03.088 CC test/app/histogram_perf/histogram_perf.o 00:03:03.348 LINK test_dma 00:03:03.348 LINK nvme_fuzz 00:03:03.348 CC app/fio/bdev/fio_plugin.o 00:03:03.348 LINK verify 00:03:03.348 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:03:03.348 LINK spdk_nvme 00:03:03.348 LINK histogram_perf 00:03:03.348 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:03.348 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:03.348 TEST_HEADER include/spdk/accel.h 00:03:03.348 TEST_HEADER include/spdk/accel_module.h 00:03:03.348 TEST_HEADER include/spdk/assert.h 00:03:03.348 TEST_HEADER include/spdk/barrier.h 00:03:03.348 TEST_HEADER include/spdk/base64.h 00:03:03.348 TEST_HEADER include/spdk/bdev.h 00:03:03.606 TEST_HEADER include/spdk/bdev_module.h 00:03:03.607 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.607 TEST_HEADER include/spdk/bit_array.h 00:03:03.607 TEST_HEADER include/spdk/bit_pool.h 00:03:03.607 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.607 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.607 TEST_HEADER include/spdk/blobfs.h 00:03:03.607 TEST_HEADER include/spdk/blob.h 00:03:03.607 TEST_HEADER include/spdk/conf.h 00:03:03.607 TEST_HEADER include/spdk/config.h 00:03:03.607 LINK spdk_top 00:03:03.607 TEST_HEADER include/spdk/cpuset.h 00:03:03.607 TEST_HEADER include/spdk/crc16.h 00:03:03.607 TEST_HEADER include/spdk/crc32.h 00:03:03.607 TEST_HEADER include/spdk/crc64.h 00:03:03.607 TEST_HEADER include/spdk/dif.h 00:03:03.607 TEST_HEADER include/spdk/dma.h 00:03:03.607 TEST_HEADER include/spdk/endian.h 00:03:03.607 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.607 TEST_HEADER include/spdk/env.h 00:03:03.607 TEST_HEADER include/spdk/event.h 00:03:03.607 TEST_HEADER include/spdk/fd_group.h 00:03:03.607 TEST_HEADER include/spdk/fd.h 00:03:03.607 TEST_HEADER include/spdk/file.h 00:03:03.607 TEST_HEADER include/spdk/fsdev.h 00:03:03.607 TEST_HEADER include/spdk/fsdev_module.h 00:03:03.607 TEST_HEADER include/spdk/ftl.h 00:03:03.607 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:03.607 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.607 TEST_HEADER include/spdk/hexlify.h 00:03:03.607 TEST_HEADER include/spdk/histogram_data.h 00:03:03.607 TEST_HEADER include/spdk/idxd.h 00:03:03.607 TEST_HEADER 
include/spdk/idxd_spec.h 00:03:03.607 CC test/app/jsoncat/jsoncat.o 00:03:03.607 TEST_HEADER include/spdk/init.h 00:03:03.607 TEST_HEADER include/spdk/ioat.h 00:03:03.607 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.607 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.607 TEST_HEADER include/spdk/iscsi_spec.h 00:03:03.607 TEST_HEADER include/spdk/json.h 00:03:03.607 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.607 TEST_HEADER include/spdk/keyring.h 00:03:03.607 TEST_HEADER include/spdk/keyring_module.h 00:03:03.607 TEST_HEADER include/spdk/likely.h 00:03:03.607 TEST_HEADER include/spdk/log.h 00:03:03.607 TEST_HEADER include/spdk/lvol.h 00:03:03.607 CC examples/idxd/perf/perf.o 00:03:03.607 TEST_HEADER include/spdk/md5.h 00:03:03.607 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.607 TEST_HEADER include/spdk/memory.h 00:03:03.607 TEST_HEADER include/spdk/mmio.h 00:03:03.607 TEST_HEADER include/spdk/nbd.h 00:03:03.607 TEST_HEADER include/spdk/net.h 00:03:03.607 TEST_HEADER include/spdk/notify.h 00:03:03.607 TEST_HEADER include/spdk/nvme.h 00:03:03.607 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.607 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.607 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.607 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.607 TEST_HEADER include/spdk/nvme_zns.h 00:03:03.607 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.607 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.607 TEST_HEADER include/spdk/nvmf.h 00:03:03.607 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.607 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.607 TEST_HEADER include/spdk/opal.h 00:03:03.607 TEST_HEADER include/spdk/opal_spec.h 00:03:03.607 TEST_HEADER include/spdk/pci_ids.h 00:03:03.607 TEST_HEADER include/spdk/pipe.h 00:03:03.607 TEST_HEADER include/spdk/queue.h 00:03:03.607 TEST_HEADER include/spdk/reduce.h 00:03:03.607 TEST_HEADER include/spdk/rpc.h 00:03:03.607 TEST_HEADER include/spdk/scheduler.h 00:03:03.607 TEST_HEADER include/spdk/scsi.h 00:03:03.607 
TEST_HEADER include/spdk/scsi_spec.h 00:03:03.607 TEST_HEADER include/spdk/sock.h 00:03:03.607 TEST_HEADER include/spdk/stdinc.h 00:03:03.607 TEST_HEADER include/spdk/string.h 00:03:03.607 TEST_HEADER include/spdk/thread.h 00:03:03.607 TEST_HEADER include/spdk/trace.h 00:03:03.607 TEST_HEADER include/spdk/trace_parser.h 00:03:03.607 TEST_HEADER include/spdk/tree.h 00:03:03.607 TEST_HEADER include/spdk/ublk.h 00:03:03.607 TEST_HEADER include/spdk/util.h 00:03:03.607 TEST_HEADER include/spdk/uuid.h 00:03:03.607 TEST_HEADER include/spdk/version.h 00:03:03.607 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.607 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.607 TEST_HEADER include/spdk/vhost.h 00:03:03.607 TEST_HEADER include/spdk/vmd.h 00:03:03.607 TEST_HEADER include/spdk/xor.h 00:03:03.607 TEST_HEADER include/spdk/zipf.h 00:03:03.607 CXX test/cpp_headers/accel.o 00:03:03.607 CC test/env/mem_callbacks/mem_callbacks.o 00:03:03.607 LINK lsvmd 00:03:03.607 LINK jsoncat 00:03:03.607 LINK spdk_bdev 00:03:03.607 LINK interrupt_tgt 00:03:03.866 CC test/env/vtophys/vtophys.o 00:03:03.866 LINK vhost_fuzz 00:03:03.866 CXX test/cpp_headers/accel_module.o 00:03:03.866 CXX test/cpp_headers/assert.o 00:03:03.866 CXX test/cpp_headers/barrier.o 00:03:03.866 LINK vtophys 00:03:03.866 LINK idxd_perf 00:03:03.866 CXX test/cpp_headers/base64.o 00:03:03.866 CC examples/vmd/led/led.o 00:03:04.125 CXX test/cpp_headers/bdev.o 00:03:04.125 CC test/app/stub/stub.o 00:03:04.125 CC test/event/event_perf/event_perf.o 00:03:04.125 LINK led 00:03:04.125 CC test/event/reactor/reactor.o 00:03:04.125 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.125 CC examples/thread/thread/thread_ex.o 00:03:04.125 LINK mem_callbacks 00:03:04.125 CC examples/sock/hello_world/hello_sock.o 00:03:04.125 LINK event_perf 00:03:04.125 LINK reactor 00:03:04.125 LINK stub 00:03:04.125 CXX test/cpp_headers/bdev_module.o 00:03:04.384 LINK env_dpdk_post_init 00:03:04.384 CC test/env/memory/memory_ut.o 
00:03:04.384 CXX test/cpp_headers/bdev_zone.o 00:03:04.384 LINK thread 00:03:04.384 CC test/env/pci/pci_ut.o 00:03:04.384 CC test/event/reactor_perf/reactor_perf.o 00:03:04.384 CXX test/cpp_headers/bit_array.o 00:03:04.384 LINK hello_sock 00:03:04.384 CC test/event/app_repeat/app_repeat.o 00:03:04.643 CC test/event/scheduler/scheduler.o 00:03:04.643 LINK reactor_perf 00:03:04.643 CXX test/cpp_headers/bit_pool.o 00:03:04.643 LINK app_repeat 00:03:04.643 CC test/rpc_client/rpc_client_test.o 00:03:04.643 CC test/nvme/aer/aer.o 00:03:04.643 CXX test/cpp_headers/blob_bdev.o 00:03:04.643 CC examples/accel/perf/accel_perf.o 00:03:04.902 LINK scheduler 00:03:04.902 LINK pci_ut 00:03:04.902 LINK rpc_client_test 00:03:04.902 CC test/nvme/reset/reset.o 00:03:04.902 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.902 CC test/nvme/sgl/sgl.o 00:03:04.902 CXX test/cpp_headers/blobfs.o 00:03:04.902 LINK aer 00:03:05.160 CC test/nvme/e2edp/nvme_dp.o 00:03:05.160 LINK reset 00:03:05.160 CXX test/cpp_headers/blob.o 00:03:05.160 CC test/nvme/overhead/overhead.o 00:03:05.160 LINK sgl 00:03:05.160 LINK iscsi_fuzz 00:03:05.160 CC test/nvme/err_injection/err_injection.o 00:03:05.160 CC test/accel/dif/dif.o 00:03:05.161 LINK accel_perf 00:03:05.419 CXX test/cpp_headers/conf.o 00:03:05.419 CXX test/cpp_headers/config.o 00:03:05.419 LINK nvme_dp 00:03:05.419 LINK err_injection 00:03:05.419 LINK overhead 00:03:05.419 CXX test/cpp_headers/cpuset.o 00:03:05.419 CXX test/cpp_headers/crc16.o 00:03:05.419 LINK memory_ut 00:03:05.419 CC test/blobfs/mkfs/mkfs.o 00:03:05.680 CC examples/nvme/hello_world/hello_world.o 00:03:05.680 CC examples/blob/hello_world/hello_blob.o 00:03:05.680 CC test/lvol/esnap/esnap.o 00:03:05.680 CXX test/cpp_headers/crc32.o 00:03:05.680 CC examples/nvme/reconnect/reconnect.o 00:03:05.680 CC test/nvme/startup/startup.o 00:03:05.680 CXX test/cpp_headers/crc64.o 00:03:05.680 LINK mkfs 00:03:05.680 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:05.680 LINK hello_world 
00:03:05.940 LINK hello_blob 00:03:05.940 CXX test/cpp_headers/dif.o 00:03:05.940 LINK startup 00:03:05.940 CC test/nvme/reserve/reserve.o 00:03:05.940 CXX test/cpp_headers/dma.o 00:03:05.940 CC test/nvme/simple_copy/simple_copy.o 00:03:05.940 LINK dif 00:03:05.940 LINK reconnect 00:03:05.940 CXX test/cpp_headers/endian.o 00:03:05.940 LINK hello_fsdev 00:03:05.940 CC test/nvme/connect_stress/connect_stress.o 00:03:05.940 LINK reserve 00:03:06.200 CC examples/blob/cli/blobcli.o 00:03:06.200 CXX test/cpp_headers/env_dpdk.o 00:03:06.200 LINK simple_copy 00:03:06.200 LINK connect_stress 00:03:06.200 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.200 CC test/nvme/boot_partition/boot_partition.o 00:03:06.200 CC test/nvme/compliance/nvme_compliance.o 00:03:06.200 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.200 CXX test/cpp_headers/env.o 00:03:06.201 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.461 LINK boot_partition 00:03:06.461 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.461 CC examples/nvme/arbitration/arbitration.o 00:03:06.461 CXX test/cpp_headers/event.o 00:03:06.461 LINK fused_ordering 00:03:06.461 LINK hello_bdev 00:03:06.461 CXX test/cpp_headers/fd_group.o 00:03:06.461 LINK blobcli 00:03:06.461 LINK doorbell_aers 00:03:06.461 LINK nvme_compliance 00:03:06.720 CC test/nvme/fdp/fdp.o 00:03:06.720 CXX test/cpp_headers/fd.o 00:03:06.720 LINK nvme_manage 00:03:06.720 CXX test/cpp_headers/file.o 00:03:06.720 LINK arbitration 00:03:06.720 CC test/bdev/bdevio/bdevio.o 00:03:06.720 CC examples/bdev/bdevperf/bdevperf.o 00:03:06.721 CC examples/nvme/hotplug/hotplug.o 00:03:06.721 CC test/nvme/cuse/cuse.o 00:03:06.980 CXX test/cpp_headers/fsdev.o 00:03:06.980 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.980 CC examples/nvme/abort/abort.o 00:03:06.980 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.980 LINK fdp 00:03:06.980 CXX test/cpp_headers/fsdev_module.o 00:03:06.980 LINK hotplug 00:03:06.980 LINK cmb_copy 00:03:07.239 LINK 
bdevio 00:03:07.239 LINK pmr_persistence 00:03:07.239 CXX test/cpp_headers/ftl.o 00:03:07.239 CXX test/cpp_headers/fuse_dispatcher.o 00:03:07.239 CXX test/cpp_headers/gpt_spec.o 00:03:07.239 CXX test/cpp_headers/hexlify.o 00:03:07.239 CXX test/cpp_headers/histogram_data.o 00:03:07.239 LINK abort 00:03:07.239 CXX test/cpp_headers/idxd.o 00:03:07.239 CXX test/cpp_headers/idxd_spec.o 00:03:07.239 CXX test/cpp_headers/init.o 00:03:07.239 CXX test/cpp_headers/ioat.o 00:03:07.498 CXX test/cpp_headers/ioat_spec.o 00:03:07.498 CXX test/cpp_headers/iscsi_spec.o 00:03:07.498 CXX test/cpp_headers/json.o 00:03:07.498 CXX test/cpp_headers/jsonrpc.o 00:03:07.498 CXX test/cpp_headers/keyring.o 00:03:07.498 CXX test/cpp_headers/keyring_module.o 00:03:07.498 CXX test/cpp_headers/likely.o 00:03:07.498 CXX test/cpp_headers/log.o 00:03:07.498 CXX test/cpp_headers/lvol.o 00:03:07.498 LINK bdevperf 00:03:07.498 CXX test/cpp_headers/md5.o 00:03:07.498 CXX test/cpp_headers/memory.o 00:03:07.498 CXX test/cpp_headers/mmio.o 00:03:07.757 CXX test/cpp_headers/nbd.o 00:03:07.757 CXX test/cpp_headers/net.o 00:03:07.757 CXX test/cpp_headers/notify.o 00:03:07.757 CXX test/cpp_headers/nvme.o 00:03:07.757 CXX test/cpp_headers/nvme_intel.o 00:03:07.757 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.757 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.757 CXX test/cpp_headers/nvme_spec.o 00:03:07.757 CXX test/cpp_headers/nvme_zns.o 00:03:07.757 CXX test/cpp_headers/nvmf_cmd.o 00:03:08.016 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:08.016 CXX test/cpp_headers/nvmf.o 00:03:08.016 CXX test/cpp_headers/nvmf_spec.o 00:03:08.016 CXX test/cpp_headers/nvmf_transport.o 00:03:08.016 CXX test/cpp_headers/opal.o 00:03:08.016 CXX test/cpp_headers/opal_spec.o 00:03:08.016 CC examples/nvmf/nvmf/nvmf.o 00:03:08.016 CXX test/cpp_headers/pci_ids.o 00:03:08.016 CXX test/cpp_headers/pipe.o 00:03:08.016 CXX test/cpp_headers/queue.o 00:03:08.016 LINK cuse 00:03:08.016 CXX test/cpp_headers/reduce.o 00:03:08.016 CXX 
test/cpp_headers/rpc.o 00:03:08.276 CXX test/cpp_headers/scheduler.o 00:03:08.276 CXX test/cpp_headers/scsi.o 00:03:08.276 CXX test/cpp_headers/scsi_spec.o 00:03:08.276 CXX test/cpp_headers/sock.o 00:03:08.276 CXX test/cpp_headers/stdinc.o 00:03:08.276 CXX test/cpp_headers/string.o 00:03:08.276 CXX test/cpp_headers/thread.o 00:03:08.276 CXX test/cpp_headers/trace.o 00:03:08.276 CXX test/cpp_headers/trace_parser.o 00:03:08.276 CXX test/cpp_headers/tree.o 00:03:08.276 LINK nvmf 00:03:08.276 CXX test/cpp_headers/ublk.o 00:03:08.276 CXX test/cpp_headers/util.o 00:03:08.276 CXX test/cpp_headers/uuid.o 00:03:08.276 CXX test/cpp_headers/version.o 00:03:08.276 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.535 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.535 CXX test/cpp_headers/vhost.o 00:03:08.535 CXX test/cpp_headers/vmd.o 00:03:08.535 CXX test/cpp_headers/xor.o 00:03:08.535 CXX test/cpp_headers/zipf.o 00:03:11.072 LINK esnap 00:03:11.332 00:03:11.332 real 1m18.279s 00:03:11.332 user 6m53.292s 00:03:11.332 sys 1m32.377s 00:03:11.332 10:31:37 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:11.332 10:31:37 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.332 ************************************ 00:03:11.332 END TEST make 00:03:11.332 ************************************ 00:03:11.332 10:31:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.332 10:31:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.332 10:31:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.332 10:31:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.332 10:31:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.332 10:31:37 -- pm/common@44 -- $ pid=5467 00:03:11.332 10:31:37 -- pm/common@50 -- $ kill -TERM 5467 00:03:11.332 10:31:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.332 10:31:37 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.332 10:31:37 -- pm/common@44 -- $ pid=5469 00:03:11.332 10:31:37 -- pm/common@50 -- $ kill -TERM 5469 00:03:11.332 10:31:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:11.332 10:31:37 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:11.597 10:31:37 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:11.597 10:31:37 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:11.597 10:31:37 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:11.597 10:31:37 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:11.597 10:31:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.597 10:31:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.597 10:31:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.597 10:31:37 -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.597 10:31:37 -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.597 10:31:37 -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.597 10:31:37 -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.597 10:31:37 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.597 10:31:37 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.597 10:31:37 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.597 10:31:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.597 10:31:37 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.597 10:31:37 -- scripts/common.sh@345 -- # : 1 00:03:11.597 10:31:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.597 10:31:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.597 10:31:37 -- scripts/common.sh@365 -- # decimal 1 00:03:11.597 10:31:37 -- scripts/common.sh@353 -- # local d=1 00:03:11.597 10:31:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.597 10:31:37 -- scripts/common.sh@355 -- # echo 1 00:03:11.597 10:31:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.597 10:31:37 -- scripts/common.sh@366 -- # decimal 2 00:03:11.597 10:31:37 -- scripts/common.sh@353 -- # local d=2 00:03:11.597 10:31:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.597 10:31:37 -- scripts/common.sh@355 -- # echo 2 00:03:11.597 10:31:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.597 10:31:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.597 10:31:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.597 10:31:37 -- scripts/common.sh@368 -- # return 0 00:03:11.597 10:31:37 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.597 10:31:37 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:11.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.597 --rc genhtml_branch_coverage=1 00:03:11.597 --rc genhtml_function_coverage=1 00:03:11.597 --rc genhtml_legend=1 00:03:11.597 --rc geninfo_all_blocks=1 00:03:11.597 --rc geninfo_unexecuted_blocks=1 00:03:11.597 00:03:11.597 ' 00:03:11.597 10:31:37 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:11.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.597 --rc genhtml_branch_coverage=1 00:03:11.597 --rc genhtml_function_coverage=1 00:03:11.597 --rc genhtml_legend=1 00:03:11.597 --rc geninfo_all_blocks=1 00:03:11.597 --rc geninfo_unexecuted_blocks=1 00:03:11.597 00:03:11.597 ' 00:03:11.597 10:31:37 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:11.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.597 --rc genhtml_branch_coverage=1 00:03:11.597 --rc 
genhtml_function_coverage=1 00:03:11.597 --rc genhtml_legend=1 00:03:11.597 --rc geninfo_all_blocks=1 00:03:11.597 --rc geninfo_unexecuted_blocks=1 00:03:11.597 00:03:11.597 ' 00:03:11.598 10:31:37 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:11.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.598 --rc genhtml_branch_coverage=1 00:03:11.598 --rc genhtml_function_coverage=1 00:03:11.598 --rc genhtml_legend=1 00:03:11.598 --rc geninfo_all_blocks=1 00:03:11.598 --rc geninfo_unexecuted_blocks=1 00:03:11.598 00:03:11.598 ' 00:03:11.598 10:31:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:11.598 10:31:37 -- nvmf/common.sh@7 -- # uname -s 00:03:11.598 10:31:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.598 10:31:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.598 10:31:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.598 10:31:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.598 10:31:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.598 10:31:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.598 10:31:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.598 10:31:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.598 10:31:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.598 10:31:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.598 10:31:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:83f02efc-e39e-4041-b990-41110c7eb81d 00:03:11.598 10:31:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=83f02efc-e39e-4041-b990-41110c7eb81d 00:03:11.598 10:31:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.598 10:31:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.598 10:31:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:11.598 10:31:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:11.598 10:31:37 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:11.598 10:31:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.598 10:31:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.598 10:31:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.598 10:31:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.598 10:31:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.598 10:31:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.598 10:31:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.598 10:31:37 -- paths/export.sh@5 -- # export PATH 00:03:11.598 10:31:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.598 10:31:37 -- nvmf/common.sh@51 -- # : 0 00:03:11.598 10:31:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.598 10:31:37 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:11.598 10:31:37 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:11.598 10:31:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.598 10:31:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.598 10:31:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.598 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.598 10:31:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.598 10:31:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.598 10:31:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.598 10:31:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.598 10:31:37 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.598 10:31:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.598 10:31:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.598 10:31:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.598 10:31:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.598 10:31:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.598 10:31:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.598 10:31:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.598 10:31:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.861 10:31:37 -- spdk/autotest.sh@48 -- # udevadm_pid=54353 00:03:11.861 10:31:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.861 10:31:37 -- pm/common@17 -- # local monitor 00:03:11.861 10:31:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.861 10:31:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.861 10:31:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.861 10:31:37 -- pm/common@25 -- # sleep 1 00:03:11.861 10:31:37 -- pm/common@21 -- # date +%s 00:03:11.861 10:31:37 -- 
pm/common@21 -- # date +%s 00:03:11.861 10:31:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731925897 00:03:11.861 10:31:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731925897 00:03:11.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731925897_collect-cpu-load.pm.log 00:03:11.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731925897_collect-vmstat.pm.log 00:03:12.800 10:31:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.800 10:31:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.800 10:31:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.800 10:31:38 -- common/autotest_common.sh@10 -- # set +x 00:03:12.800 10:31:38 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.800 10:31:38 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:12.800 10:31:38 -- common/autotest_common.sh@10 -- # set +x 00:03:12.800 10:31:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:12.800 10:31:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:12.800 10:31:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:12.800 10:31:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:12.800 10:31:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:12.800 10:31:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.800 10:31:38 -- common/autotest_common.sh@1457 -- # uname 00:03:12.800 10:31:38 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:12.800 10:31:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.800 10:31:38 -- common/autotest_common.sh@1477 -- 
# uname 00:03:12.800 10:31:38 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:12.800 10:31:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.800 10:31:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.800 lcov: LCOV version 1.15 00:03:12.800 10:31:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.684 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:42.583 10:32:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:42.583 10:32:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.583 10:32:06 -- common/autotest_common.sh@10 -- # set +x 00:03:42.583 10:32:06 -- spdk/autotest.sh@78 -- # rm -f 00:03:42.583 10:32:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.583 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:42.583 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:42.583 10:32:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:42.583 10:32:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:42.583 10:32:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:42.583 10:32:07 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:42.583 
10:32:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.583 10:32:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:42.584 10:32:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:42.584 10:32:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.584 10:32:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.584 10:32:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.584 10:32:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:42.584 10:32:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:42.584 10:32:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:42.584 10:32:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.584 10:32:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.584 10:32:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:42.584 10:32:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:42.584 10:32:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:42.584 10:32:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.584 10:32:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:42.584 10:32:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:42.584 10:32:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:42.584 10:32:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:42.584 10:32:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:42.584 10:32:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:42.584 10:32:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.584 10:32:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.584 10:32:07 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:42.584 10:32:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:42.584 10:32:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.584 No valid GPT data, bailing 00:03:42.584 10:32:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.584 10:32:07 -- scripts/common.sh@394 -- # pt= 00:03:42.584 10:32:07 -- scripts/common.sh@395 -- # return 1 00:03:42.584 10:32:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.584 1+0 records in 00:03:42.584 1+0 records out 00:03:42.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621485 s, 169 MB/s 00:03:42.584 10:32:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.584 10:32:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.584 10:32:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:42.584 10:32:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:42.584 10:32:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:42.584 No valid GPT data, bailing 00:03:42.584 10:32:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:42.584 10:32:07 -- scripts/common.sh@394 -- # pt= 00:03:42.584 10:32:07 -- scripts/common.sh@395 -- # return 1 00:03:42.584 10:32:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:42.584 1+0 records in 00:03:42.584 1+0 records out 00:03:42.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617562 s, 170 MB/s 00:03:42.584 10:32:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.584 10:32:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.584 10:32:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:42.584 10:32:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:42.584 10:32:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:42.584 No valid GPT data, bailing 00:03:42.584 10:32:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:42.584 10:32:07 -- scripts/common.sh@394 -- # pt= 00:03:42.584 10:32:07 -- scripts/common.sh@395 -- # return 1 00:03:42.584 10:32:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:42.584 1+0 records in 00:03:42.584 1+0 records out 00:03:42.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00648199 s, 162 MB/s 00:03:42.584 10:32:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:42.584 10:32:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:42.584 10:32:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:42.584 10:32:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:42.584 10:32:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:42.584 No valid GPT data, bailing 00:03:42.584 10:32:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:42.584 10:32:07 -- scripts/common.sh@394 -- # pt= 00:03:42.584 10:32:07 -- scripts/common.sh@395 -- # return 1 00:03:42.584 10:32:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:42.584 1+0 records in 00:03:42.584 1+0 records out 00:03:42.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421663 s, 249 MB/s 00:03:42.584 10:32:07 -- spdk/autotest.sh@105 -- # sync 00:03:42.584 10:32:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.584 10:32:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.584 10:32:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.143 10:32:10 -- spdk/autotest.sh@111 -- # uname -s 00:03:45.143 10:32:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:45.143 10:32:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:45.143 10:32:10 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
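Each namespace pass above follows the same guard: `spdk-gpt.py` and `blkid -s PTTYPE` look for a partition table, and only when none is found does autotest zero the first 1 MiB with `dd`. A safe, self-contained sketch of that guard follows — a scratch file stands in for the namespace, since pointing it at a real device would destroy data; on real hardware `DEV` would be a block device such as `/dev/nvme0n1`:

```shell
#!/bin/sh
# Sketch of the "wipe only if no partition table" guard seen in the log above.
# A scratch file stands in for the NVMe namespace so this is safe to run.
DEV=$(mktemp)
pt=$(blkid -s PTTYPE -o value "$DEV" 2>/dev/null) || true
if [ -z "$pt" ]; then
    # No partition table reported: clear the first 1 MiB, as autotest does.
    dd if=/dev/zero of="$DEV" bs=1M count=1 2>/dev/null
else
    echo "skipping $DEV: partition table type $pt" >&2
fi
wc -c < "$DEV"   # prints 1048576 after the wipe
```

The log's `block_in_use` wrapper additionally consults `spdk-gpt.py` before falling back to `blkid`; only the `blkid` half is sketched here.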
00:03:45.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.713 Hugepages 00:03:45.713 node hugesize free / total 00:03:45.713 node0 1048576kB 0 / 0 00:03:45.713 node0 2048kB 0 / 0 00:03:45.713 00:03:45.713 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.713 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:45.974 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:45.974 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:45.974 10:32:11 -- spdk/autotest.sh@117 -- # uname -s 00:03:45.974 10:32:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:45.974 10:32:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:45.974 10:32:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.916 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.916 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.176 10:32:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:48.117 10:32:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:48.117 10:32:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:48.117 10:32:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:48.117 10:32:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:48.117 10:32:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:48.117 10:32:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:48.117 10:32:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.117 10:32:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:48.117 10:32:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:48.117 10:32:13 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:48.117 10:32:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:48.117 10:32:13 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.687 Waiting for block devices as requested 00:03:48.687 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:48.949 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:48.949 10:32:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:48.949 10:32:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:48.949 10:32:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:48.949 10:32:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:48.949 10:32:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:48.949 10:32:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:48.949 10:32:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:48.949 10:32:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:48.949 10:32:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:48.949 10:32:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:48.949 10:32:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:48.949 10:32:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:48.949 10:32:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:48.949 10:32:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:48.949 10:32:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:48.949 10:32:14 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:03:48.949 10:32:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:48.949 10:32:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:48.949 10:32:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:48.949 10:32:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:48.949 10:32:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:48.949 10:32:14 -- common/autotest_common.sh@1543 -- # continue 00:03:48.949 10:32:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:48.949 10:32:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:48.949 10:32:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:48.949 10:32:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:48.949 10:32:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:48.949 10:32:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:48.949 10:32:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:48.949 10:32:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:48.949 10:32:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:48.949 10:32:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:48.949 10:32:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:48.949 10:32:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:48.949 10:32:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:48.949 10:32:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:48.949 10:32:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:48.949 10:32:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:48.949 10:32:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:03:48.949 10:32:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:48.949 10:32:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:48.949 10:32:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:48.949 10:32:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:48.949 10:32:14 -- common/autotest_common.sh@1543 -- # continue 00:03:48.949 10:32:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:48.949 10:32:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:48.949 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:03:48.949 10:32:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:48.949 10:32:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.949 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:03:49.220 10:32:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.070 10:32:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:50.070 10:32:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.070 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:50.070 10:32:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:50.070 10:32:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:50.070 10:32:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.070 10:32:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:50.070 10:32:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:50.070 10:32:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:50.070 10:32:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:50.070 10:32:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:50.070 
10:32:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:50.070 10:32:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:50.070 10:32:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.070 10:32:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:50.070 10:32:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:50.331 10:32:16 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:50.331 10:32:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:50.331 10:32:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.331 10:32:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:50.331 10:32:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.331 10:32:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.331 10:32:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.331 10:32:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:50.331 10:32:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.331 10:32:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.331 10:32:16 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:50.331 10:32:16 -- common/autotest_common.sh@1572 -- # return 0 00:03:50.331 10:32:16 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:50.331 10:32:16 -- common/autotest_common.sh@1580 -- # return 0 00:03:50.331 10:32:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:50.331 10:32:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:50.331 10:32:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.331 10:32:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.331 10:32:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:50.331 10:32:16 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.331 10:32:16 -- common/autotest_common.sh@10 -- # set +x 00:03:50.331 10:32:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:50.331 10:32:16 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.331 10:32:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.331 10:32:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.331 10:32:16 -- common/autotest_common.sh@10 -- # set +x 00:03:50.331 ************************************ 00:03:50.331 START TEST env 00:03:50.331 ************************************ 00:03:50.331 10:32:16 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.331 * Looking for test storage... 00:03:50.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:50.331 10:32:16 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.331 10:32:16 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.331 10:32:16 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.592 10:32:16 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.592 10:32:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.592 10:32:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.592 10:32:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.592 10:32:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.592 10:32:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.592 10:32:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.592 10:32:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.592 10:32:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.592 10:32:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.592 10:32:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.592 10:32:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.592 10:32:16 env -- 
scripts/common.sh@344 -- # case "$op" in 00:03:50.592 10:32:16 env -- scripts/common.sh@345 -- # : 1 00:03:50.592 10:32:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.592 10:32:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.592 10:32:16 env -- scripts/common.sh@365 -- # decimal 1 00:03:50.592 10:32:16 env -- scripts/common.sh@353 -- # local d=1 00:03:50.592 10:32:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.592 10:32:16 env -- scripts/common.sh@355 -- # echo 1 00:03:50.592 10:32:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.592 10:32:16 env -- scripts/common.sh@366 -- # decimal 2 00:03:50.592 10:32:16 env -- scripts/common.sh@353 -- # local d=2 00:03:50.592 10:32:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.592 10:32:16 env -- scripts/common.sh@355 -- # echo 2 00:03:50.592 10:32:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.592 10:32:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.592 10:32:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.592 10:32:16 env -- scripts/common.sh@368 -- # return 0 00:03:50.592 10:32:16 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.592 10:32:16 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.592 --rc genhtml_branch_coverage=1 00:03:50.592 --rc genhtml_function_coverage=1 00:03:50.592 --rc genhtml_legend=1 00:03:50.592 --rc geninfo_all_blocks=1 00:03:50.592 --rc geninfo_unexecuted_blocks=1 00:03:50.592 00:03:50.592 ' 00:03:50.592 10:32:16 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.592 --rc genhtml_branch_coverage=1 00:03:50.592 --rc genhtml_function_coverage=1 00:03:50.592 --rc genhtml_legend=1 00:03:50.592 --rc 
geninfo_all_blocks=1 00:03:50.592 --rc geninfo_unexecuted_blocks=1 00:03:50.592 00:03:50.592 ' 00:03:50.592 10:32:16 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.592 --rc genhtml_branch_coverage=1 00:03:50.592 --rc genhtml_function_coverage=1 00:03:50.592 --rc genhtml_legend=1 00:03:50.592 --rc geninfo_all_blocks=1 00:03:50.592 --rc geninfo_unexecuted_blocks=1 00:03:50.592 00:03:50.592 ' 00:03:50.592 10:32:16 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.592 --rc genhtml_branch_coverage=1 00:03:50.592 --rc genhtml_function_coverage=1 00:03:50.592 --rc genhtml_legend=1 00:03:50.592 --rc geninfo_all_blocks=1 00:03:50.592 --rc geninfo_unexecuted_blocks=1 00:03:50.592 00:03:50.592 ' 00:03:50.592 10:32:16 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.592 10:32:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.592 10:32:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.592 10:32:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.592 ************************************ 00:03:50.592 START TEST env_memory 00:03:50.592 ************************************ 00:03:50.592 10:32:16 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.592 00:03:50.592 00:03:50.592 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.592 http://cunit.sourceforge.net/ 00:03:50.592 00:03:50.592 00:03:50.592 Suite: memory 00:03:50.592 Test: alloc and free memory map ...[2024-11-18 10:32:16.367895] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:50.592 passed 00:03:50.592 Test: mem map translation ...[2024-11-18 10:32:16.410251] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:50.592 [2024-11-18 10:32:16.410292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:50.592 [2024-11-18 10:32:16.410350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:50.592 [2024-11-18 10:32:16.410369] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:50.592 passed 00:03:50.592 Test: mem map registration ...[2024-11-18 10:32:16.472015] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:50.592 [2024-11-18 10:32:16.472049] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:50.852 passed 00:03:50.852 Test: mem map adjacent registrations ...passed 00:03:50.852 00:03:50.852 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.852 suites 1 1 n/a 0 0 00:03:50.852 tests 4 4 4 0 0 00:03:50.852 asserts 152 152 152 0 n/a 00:03:50.853 00:03:50.853 Elapsed time = 0.227 seconds 00:03:50.853 00:03:50.853 real 0m0.280s 00:03:50.853 user 0m0.243s 00:03:50.853 sys 0m0.025s 00:03:50.853 10:32:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.853 10:32:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:50.853 ************************************ 00:03:50.853 END TEST env_memory 00:03:50.853 ************************************ 00:03:50.853 10:32:16 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:50.853 
10:32:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.853 10:32:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.853 10:32:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.853 ************************************ 00:03:50.853 START TEST env_vtophys 00:03:50.853 ************************************ 00:03:50.853 10:32:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:50.853 EAL: lib.eal log level changed from notice to debug 00:03:50.853 EAL: Detected lcore 0 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 1 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 2 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 3 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 4 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 5 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 6 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 7 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 8 as core 0 on socket 0 00:03:50.853 EAL: Detected lcore 9 as core 0 on socket 0 00:03:50.853 EAL: Maximum logical cores by configuration: 128 00:03:50.853 EAL: Detected CPU lcores: 10 00:03:50.853 EAL: Detected NUMA nodes: 1 00:03:50.853 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:50.853 EAL: Detected shared linkage of DPDK 00:03:50.853 EAL: No shared files mode enabled, IPC will be disabled 00:03:50.853 EAL: Selected IOVA mode 'PA' 00:03:50.853 EAL: Probing VFIO support... 00:03:50.853 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:50.853 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:50.853 EAL: Ask a virtual area of 0x2e000 bytes 00:03:50.853 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:50.853 EAL: Setting up physically contiguous memory... 
00:03:50.853 EAL: Setting maximum number of open files to 524288 00:03:50.853 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:50.853 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:50.853 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.853 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:50.853 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.853 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.853 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:50.853 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:50.853 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.853 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:50.853 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.853 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.853 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:50.853 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:50.853 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.853 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:50.853 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.853 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.853 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:50.853 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:50.853 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.853 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:50.853 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.853 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.853 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:50.853 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:50.853 EAL: Hugepages will be freed exactly as allocated. 
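Each of the four memseg lists reserved above asks for a 0x400000000-byte virtual area. That figure is simply n_segs × hugepage_sz from the "Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152" entry; a quick check of the arithmetic:

```shell
#!/bin/sh
# Verify the memseg-list VA reservation size reported by EAL above:
# 8192 segments of 2 MiB (2097152-byte) hugepages each.
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))
printf '0x%x\n' $(( n_segs * hugepage_sz ))   # prints 0x400000000 (16 GiB)
```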
00:03:50.853 EAL: No shared files mode enabled, IPC is disabled 00:03:50.853 EAL: No shared files mode enabled, IPC is disabled 00:03:51.113 EAL: TSC frequency is ~2290000 KHz 00:03:51.113 EAL: Main lcore 0 is ready (tid=7f7627736a40;cpuset=[0]) 00:03:51.113 EAL: Trying to obtain current memory policy. 00:03:51.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.113 EAL: Restoring previous memory policy: 0 00:03:51.113 EAL: request: mp_malloc_sync 00:03:51.113 EAL: No shared files mode enabled, IPC is disabled 00:03:51.113 EAL: Heap on socket 0 was expanded by 2MB 00:03:51.113 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:51.113 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:51.113 EAL: Mem event callback 'spdk:(nil)' registered 00:03:51.113 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:51.113 00:03:51.113 00:03:51.113 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.113 http://cunit.sourceforge.net/ 00:03:51.113 00:03:51.113 00:03:51.113 Suite: components_suite 00:03:51.680 Test: vtophys_malloc_test ...passed 00:03:51.680 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:51.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.680 EAL: Restoring previous memory policy: 4 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was expanded by 4MB 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was shrunk by 4MB 00:03:51.680 EAL: Trying to obtain current memory policy. 
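Looking back at the controller probe a few entries earlier: autotest reads OACS (Optional Admin Command Support) from `nvme id-ctrl` and masks bit 3 (value 0x8, the Namespace Management capability in the NVMe spec) to obtain `oacs_ns_manage=8`. The same mask, reproduced on the value the log reports:

```shell
#!/bin/sh
# OACS as reported for both controllers above; bit 3 (0x8) indicates
# Namespace Management/Attachment support.
oacs=0x12a
oacs_ns_manage=$(( oacs & 0x8 ))
echo "$oacs_ns_manage"   # prints 8, so namespace management is supported
```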
00:03:51.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.680 EAL: Restoring previous memory policy: 4 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was expanded by 6MB 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was shrunk by 6MB 00:03:51.680 EAL: Trying to obtain current memory policy. 00:03:51.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.680 EAL: Restoring previous memory policy: 4 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was expanded by 10MB 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was shrunk by 10MB 00:03:51.680 EAL: Trying to obtain current memory policy. 00:03:51.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.680 EAL: Restoring previous memory policy: 4 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was expanded by 18MB 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was shrunk by 18MB 00:03:51.680 EAL: Trying to obtain current memory policy. 
00:03:51.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.680 EAL: Restoring previous memory policy: 4 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was expanded by 34MB 00:03:51.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.680 EAL: request: mp_malloc_sync 00:03:51.680 EAL: No shared files mode enabled, IPC is disabled 00:03:51.680 EAL: Heap on socket 0 was shrunk by 34MB 00:03:51.680 EAL: Trying to obtain current memory policy. 00:03:51.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.939 EAL: Restoring previous memory policy: 4 00:03:51.939 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.939 EAL: request: mp_malloc_sync 00:03:51.939 EAL: No shared files mode enabled, IPC is disabled 00:03:51.939 EAL: Heap on socket 0 was expanded by 66MB 00:03:51.939 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.939 EAL: request: mp_malloc_sync 00:03:51.939 EAL: No shared files mode enabled, IPC is disabled 00:03:51.939 EAL: Heap on socket 0 was shrunk by 66MB 00:03:51.939 EAL: Trying to obtain current memory policy. 00:03:51.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.198 EAL: Restoring previous memory policy: 4 00:03:52.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.198 EAL: request: mp_malloc_sync 00:03:52.198 EAL: No shared files mode enabled, IPC is disabled 00:03:52.198 EAL: Heap on socket 0 was expanded by 130MB 00:03:52.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.459 EAL: request: mp_malloc_sync 00:03:52.459 EAL: No shared files mode enabled, IPC is disabled 00:03:52.459 EAL: Heap on socket 0 was shrunk by 130MB 00:03:52.459 EAL: Trying to obtain current memory policy. 
00:03:52.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.719 EAL: Restoring previous memory policy: 4 00:03:52.719 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.719 EAL: request: mp_malloc_sync 00:03:52.719 EAL: No shared files mode enabled, IPC is disabled 00:03:52.719 EAL: Heap on socket 0 was expanded by 258MB 00:03:52.986 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.246 EAL: request: mp_malloc_sync 00:03:53.246 EAL: No shared files mode enabled, IPC is disabled 00:03:53.246 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.512 EAL: Trying to obtain current memory policy. 00:03:53.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.774 EAL: Restoring previous memory policy: 4 00:03:53.774 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.774 EAL: request: mp_malloc_sync 00:03:53.774 EAL: No shared files mode enabled, IPC is disabled 00:03:53.774 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.714 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.973 EAL: request: mp_malloc_sync 00:03:54.973 EAL: No shared files mode enabled, IPC is disabled 00:03:54.973 EAL: Heap on socket 0 was shrunk by 514MB 00:03:55.912 EAL: Trying to obtain current memory policy. 
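The vtophys_spdk_malloc_test expansions in this stretch of the log step through 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, which reads as 2^k + 2 MB on top of the initial 2 MB heap (an observation about this log, not a claim about the test's source):

```shell
#!/bin/sh
# Reproduce the allocation-size ladder seen in the malloc test above.
for k in 1 2 3 4 5 6 7 8 9 10; do
    printf '%dMB ' $(( (1 << k) + 2 ))
done
printf '\n'   # prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
```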
00:03:55.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.172 EAL: Restoring previous memory policy: 4 00:03:56.172 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.172 EAL: request: mp_malloc_sync 00:03:56.172 EAL: No shared files mode enabled, IPC is disabled 00:03:56.172 EAL: Heap on socket 0 was expanded by 1026MB 00:03:58.206 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.206 EAL: request: mp_malloc_sync 00:03:58.206 EAL: No shared files mode enabled, IPC is disabled 00:03:58.206 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:00.116 passed 00:04:00.116 00:04:00.116 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.116 suites 1 1 n/a 0 0 00:04:00.116 tests 2 2 2 0 0 00:04:00.116 asserts 5810 5810 5810 0 n/a 00:04:00.116 00:04:00.116 Elapsed time = 8.671 seconds 00:04:00.116 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.116 EAL: request: mp_malloc_sync 00:04:00.116 EAL: No shared files mode enabled, IPC is disabled 00:04:00.116 EAL: Heap on socket 0 was shrunk by 2MB 00:04:00.116 EAL: No shared files mode enabled, IPC is disabled 00:04:00.116 EAL: No shared files mode enabled, IPC is disabled 00:04:00.116 EAL: No shared files mode enabled, IPC is disabled 00:04:00.116 00:04:00.116 real 0m9.001s 00:04:00.116 user 0m7.652s 00:04:00.116 sys 0m1.196s 00:04:00.116 10:32:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.116 10:32:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:00.116 ************************************ 00:04:00.116 END TEST env_vtophys 00:04:00.116 ************************************ 00:04:00.116 10:32:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:00.116 10:32:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.116 10:32:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.116 10:32:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.116 
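The heap expansion sizes in the env_vtophys run above (34MB, 66MB, 130MB, 258MB, 514MB, 1026MB) follow a clean pattern. A minimal sketch (plain Python, not SPDK code; the "power-of-two buffer plus one 2MB hugepage of overhead" interpretation is an assumption inferred from the numbers, not taken from the test source):

```python
# Heap expansion sizes observed in the env_vtophys log above, in MB.
observed = [34, 66, 130, 258, 514, 1026]

# Assumed model: the test allocates power-of-two buffers (32MB..1024MB)
# and the EAL heap grows by the buffer size plus one extra 2MB hugepage
# of allocator overhead.
predicted = [2**n + 2 for n in range(5, 11)]

print(predicted)  # [34, 66, 130, 258, 514, 1026]
assert predicted == observed
```

The matching "shrunk by" lines show each allocation being freed again, so the heap returns to its starting size between steps.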
************************************ 00:04:00.116 START TEST env_pci 00:04:00.116 ************************************ 00:04:00.116 10:32:25 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:00.116 00:04:00.116 00:04:00.116 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.116 http://cunit.sourceforge.net/ 00:04:00.116 00:04:00.116 00:04:00.116 Suite: pci 00:04:00.116 Test: pci_hook ...[2024-11-18 10:32:25.763477] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56655 has claimed it 00:04:00.116 passed 00:04:00.116 00:04:00.116 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.116 suites 1 1 n/a 0 0 00:04:00.116 tests 1 1 1 0 0 00:04:00.116 asserts 25 25 25 0 n/a 00:04:00.116 00:04:00.116 Elapsed time = 0.006 seconds 00:04:00.116 EAL: Cannot find device (10000:00:01.0) 00:04:00.116 EAL: Failed to attach device on primary process 00:04:00.116 00:04:00.116 real 0m0.107s 00:04:00.116 user 0m0.047s 00:04:00.116 sys 0m0.059s 00:04:00.116 10:32:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.116 10:32:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:00.116 ************************************ 00:04:00.116 END TEST env_pci 00:04:00.116 ************************************ 00:04:00.116 10:32:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:00.116 10:32:25 env -- env/env.sh@15 -- # uname 00:04:00.116 10:32:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:00.116 10:32:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:00.116 10:32:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:00.116 10:32:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:00.116 10:32:25 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.116 10:32:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.116 ************************************ 00:04:00.116 START TEST env_dpdk_post_init 00:04:00.116 ************************************ 00:04:00.116 10:32:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:00.116 EAL: Detected CPU lcores: 10 00:04:00.116 EAL: Detected NUMA nodes: 1 00:04:00.116 EAL: Detected shared linkage of DPDK 00:04:00.116 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.116 EAL: Selected IOVA mode 'PA' 00:04:00.377 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.377 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:00.377 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:00.377 Starting DPDK initialization... 00:04:00.377 Starting SPDK post initialization... 00:04:00.377 SPDK NVMe probe 00:04:00.377 Attaching to 0000:00:10.0 00:04:00.377 Attaching to 0000:00:11.0 00:04:00.377 Attached to 0000:00:10.0 00:04:00.377 Attached to 0000:00:11.0 00:04:00.377 Cleaning up... 
00:04:00.377 00:04:00.377 real 0m0.298s 00:04:00.377 user 0m0.082s 00:04:00.377 sys 0m0.118s 00:04:00.377 10:32:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.377 10:32:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:00.377 ************************************ 00:04:00.377 END TEST env_dpdk_post_init 00:04:00.377 ************************************ 00:04:00.638 10:32:26 env -- env/env.sh@26 -- # uname 00:04:00.638 10:32:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:00.638 10:32:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.638 10:32:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.638 10:32:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.638 10:32:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.638 ************************************ 00:04:00.638 START TEST env_mem_callbacks 00:04:00.638 ************************************ 00:04:00.638 10:32:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.638 EAL: Detected CPU lcores: 10 00:04:00.638 EAL: Detected NUMA nodes: 1 00:04:00.638 EAL: Detected shared linkage of DPDK 00:04:00.638 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.638 EAL: Selected IOVA mode 'PA' 00:04:00.638 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.638 00:04:00.638 00:04:00.638 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.638 http://cunit.sourceforge.net/ 00:04:00.638 00:04:00.638 00:04:00.638 Suite: memory 00:04:00.638 Test: test ... 
00:04:00.638 register 0x200000200000 2097152 00:04:00.638 malloc 3145728 00:04:00.638 register 0x200000400000 4194304 00:04:00.638 buf 0x2000004fffc0 len 3145728 PASSED 00:04:00.638 malloc 64 00:04:00.638 buf 0x2000004ffec0 len 64 PASSED 00:04:00.638 malloc 4194304 00:04:00.638 register 0x200000800000 6291456 00:04:00.638 buf 0x2000009fffc0 len 4194304 PASSED 00:04:00.638 free 0x2000004fffc0 3145728 00:04:00.638 free 0x2000004ffec0 64 00:04:00.638 unregister 0x200000400000 4194304 PASSED 00:04:00.638 free 0x2000009fffc0 4194304 00:04:00.638 unregister 0x200000800000 6291456 PASSED 00:04:00.898 malloc 8388608 00:04:00.898 register 0x200000400000 10485760 00:04:00.898 buf 0x2000005fffc0 len 8388608 PASSED 00:04:00.898 free 0x2000005fffc0 8388608 00:04:00.898 unregister 0x200000400000 10485760 PASSED 00:04:00.898 passed 00:04:00.898 00:04:00.898 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.898 suites 1 1 n/a 0 0 00:04:00.898 tests 1 1 1 0 0 00:04:00.898 asserts 15 15 15 0 n/a 00:04:00.898 00:04:00.898 Elapsed time = 0.085 seconds 00:04:00.898 00:04:00.898 real 0m0.300s 00:04:00.898 user 0m0.120s 00:04:00.898 sys 0m0.077s 00:04:00.898 10:32:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.898 10:32:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:00.898 ************************************ 00:04:00.898 END TEST env_mem_callbacks 00:04:00.898 ************************************ 00:04:00.898 00:04:00.898 real 0m10.583s 00:04:00.898 user 0m8.378s 00:04:00.898 sys 0m1.852s 00:04:00.898 10:32:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.898 10:32:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.898 ************************************ 00:04:00.898 END TEST env 00:04:00.898 ************************************ 00:04:00.898 10:32:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:00.898 10:32:26 -- 
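The mem_callbacks output above interleaves malloc/free with register/unregister events, and the test passes because every registered region is later unregistered with the same length. A small replay of those events (addresses and lengths copied from the log; the tracker itself is an illustrative model, not SPDK's implementation):

```python
# Register/unregister events as they appear in the mem_callbacks log.
events = [
    ("register",   0x200000200000, 2097152),
    ("register",   0x200000400000, 4194304),
    ("register",   0x200000800000, 6291456),
    ("unregister", 0x200000400000, 4194304),
    ("unregister", 0x200000800000, 6291456),
    ("register",   0x200000400000, 10485760),
    ("unregister", 0x200000400000, 10485760),
]

regions = {}
for kind, addr, length in events:
    if kind == "register":
        regions[addr] = length
    else:
        # An unregister must match a prior register exactly.
        assert regions.pop(addr) == length

# Only the initial 2MB region remains; it is torn down at app exit
# (the "shrunk by 2MB" line in the earlier teardown), not in the test body.
assert regions == {0x200000200000: 2097152}
```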
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.898 10:32:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.898 10:32:26 -- common/autotest_common.sh@10 -- # set +x 00:04:00.898 ************************************ 00:04:00.898 START TEST rpc 00:04:00.898 ************************************ 00:04:00.898 10:32:26 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:01.158 * Looking for test storage... 00:04:01.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.158 10:32:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.158 10:32:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.158 10:32:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.158 10:32:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.158 10:32:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.158 10:32:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.158 10:32:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.158 10:32:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.158 10:32:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.158 10:32:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.158 10:32:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.158 10:32:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:01.158 10:32:26 rpc -- scripts/common.sh@345 -- # : 1 00:04:01.158 10:32:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.158 10:32:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.158 10:32:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:01.158 10:32:26 rpc -- scripts/common.sh@353 -- # local d=1 00:04:01.158 10:32:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.158 10:32:26 rpc -- scripts/common.sh@355 -- # echo 1 00:04:01.158 10:32:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.158 10:32:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:01.158 10:32:26 rpc -- scripts/common.sh@353 -- # local d=2 00:04:01.158 10:32:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.158 10:32:26 rpc -- scripts/common.sh@355 -- # echo 2 00:04:01.158 10:32:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.158 10:32:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.158 10:32:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.158 10:32:26 rpc -- scripts/common.sh@368 -- # return 0 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.158 --rc genhtml_branch_coverage=1 00:04:01.158 --rc genhtml_function_coverage=1 00:04:01.158 --rc genhtml_legend=1 00:04:01.158 --rc geninfo_all_blocks=1 00:04:01.158 --rc geninfo_unexecuted_blocks=1 00:04:01.158 00:04:01.158 ' 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.158 --rc genhtml_branch_coverage=1 00:04:01.158 --rc genhtml_function_coverage=1 00:04:01.158 --rc genhtml_legend=1 00:04:01.158 --rc geninfo_all_blocks=1 00:04:01.158 --rc geninfo_unexecuted_blocks=1 00:04:01.158 00:04:01.158 ' 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:01.158 --rc genhtml_branch_coverage=1 00:04:01.158 --rc genhtml_function_coverage=1 00:04:01.158 --rc genhtml_legend=1 00:04:01.158 --rc geninfo_all_blocks=1 00:04:01.158 --rc geninfo_unexecuted_blocks=1 00:04:01.158 00:04:01.158 ' 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.158 --rc genhtml_branch_coverage=1 00:04:01.158 --rc genhtml_function_coverage=1 00:04:01.158 --rc genhtml_legend=1 00:04:01.158 --rc geninfo_all_blocks=1 00:04:01.158 --rc geninfo_unexecuted_blocks=1 00:04:01.158 00:04:01.158 ' 00:04:01.158 10:32:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56788 00:04:01.158 10:32:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.158 10:32:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56788 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 56788 ']' 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.158 10:32:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.158 10:32:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.158 [2024-11-18 10:32:27.034797] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:01.158 [2024-11-18 10:32:27.034911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56788 ] 00:04:01.418 [2024-11-18 10:32:27.207519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.678 [2024-11-18 10:32:27.333959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:01.678 [2024-11-18 10:32:27.334036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56788' to capture a snapshot of events at runtime. 00:04:01.678 [2024-11-18 10:32:27.334050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:01.678 [2024-11-18 10:32:27.334063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:01.678 [2024-11-18 10:32:27.334073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56788 for offline analysis/debug. 
00:04:01.678 [2024-11-18 10:32:27.335379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.620 10:32:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.620 10:32:28 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:02.620 10:32:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.620 10:32:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.620 10:32:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:02.620 10:32:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:02.620 10:32:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.620 10:32:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.620 10:32:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.620 ************************************ 00:04:02.620 START TEST rpc_integrity 00:04:02.620 ************************************ 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:02.620 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.620 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.620 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.620 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.620 10:32:28 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.620 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:02.620 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.620 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.620 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.620 { 00:04:02.620 "name": "Malloc0", 00:04:02.620 "aliases": [ 00:04:02.620 "7eeeaa5f-0702-445c-882d-25bd9a4a28fd" 00:04:02.620 ], 00:04:02.620 "product_name": "Malloc disk", 00:04:02.620 "block_size": 512, 00:04:02.620 "num_blocks": 16384, 00:04:02.620 "uuid": "7eeeaa5f-0702-445c-882d-25bd9a4a28fd", 00:04:02.620 "assigned_rate_limits": { 00:04:02.620 "rw_ios_per_sec": 0, 00:04:02.620 "rw_mbytes_per_sec": 0, 00:04:02.620 "r_mbytes_per_sec": 0, 00:04:02.620 "w_mbytes_per_sec": 0 00:04:02.620 }, 00:04:02.620 "claimed": false, 00:04:02.620 "zoned": false, 00:04:02.620 "supported_io_types": { 00:04:02.620 "read": true, 00:04:02.620 "write": true, 00:04:02.620 "unmap": true, 00:04:02.620 "flush": true, 00:04:02.620 "reset": true, 00:04:02.620 "nvme_admin": false, 00:04:02.620 "nvme_io": false, 00:04:02.620 "nvme_io_md": false, 00:04:02.620 "write_zeroes": true, 00:04:02.620 "zcopy": true, 00:04:02.620 "get_zone_info": false, 00:04:02.620 "zone_management": false, 00:04:02.620 "zone_append": false, 00:04:02.620 "compare": false, 00:04:02.620 "compare_and_write": false, 00:04:02.620 "abort": true, 00:04:02.620 "seek_hole": false, 
00:04:02.620 "seek_data": false, 00:04:02.620 "copy": true, 00:04:02.620 "nvme_iov_md": false 00:04:02.620 }, 00:04:02.620 "memory_domains": [ 00:04:02.620 { 00:04:02.620 "dma_device_id": "system", 00:04:02.620 "dma_device_type": 1 00:04:02.620 }, 00:04:02.620 { 00:04:02.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.620 "dma_device_type": 2 00:04:02.620 } 00:04:02.620 ], 00:04:02.620 "driver_specific": {} 00:04:02.620 } 00:04:02.620 ]' 00:04:02.620 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.880 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.880 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:02.880 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.880 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.880 [2024-11-18 10:32:28.519861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:02.880 [2024-11-18 10:32:28.519933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.880 [2024-11-18 10:32:28.519975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:02.880 [2024-11-18 10:32:28.520004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.880 [2024-11-18 10:32:28.522449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.880 [2024-11-18 10:32:28.522494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.880 Passthru0 00:04:02.880 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.880 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.880 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.880 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:02.880 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.880 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.880 { 00:04:02.880 "name": "Malloc0", 00:04:02.880 "aliases": [ 00:04:02.880 "7eeeaa5f-0702-445c-882d-25bd9a4a28fd" 00:04:02.880 ], 00:04:02.880 "product_name": "Malloc disk", 00:04:02.880 "block_size": 512, 00:04:02.880 "num_blocks": 16384, 00:04:02.880 "uuid": "7eeeaa5f-0702-445c-882d-25bd9a4a28fd", 00:04:02.880 "assigned_rate_limits": { 00:04:02.880 "rw_ios_per_sec": 0, 00:04:02.880 "rw_mbytes_per_sec": 0, 00:04:02.880 "r_mbytes_per_sec": 0, 00:04:02.880 "w_mbytes_per_sec": 0 00:04:02.880 }, 00:04:02.880 "claimed": true, 00:04:02.880 "claim_type": "exclusive_write", 00:04:02.880 "zoned": false, 00:04:02.880 "supported_io_types": { 00:04:02.880 "read": true, 00:04:02.880 "write": true, 00:04:02.880 "unmap": true, 00:04:02.880 "flush": true, 00:04:02.880 "reset": true, 00:04:02.880 "nvme_admin": false, 00:04:02.880 "nvme_io": false, 00:04:02.880 "nvme_io_md": false, 00:04:02.880 "write_zeroes": true, 00:04:02.880 "zcopy": true, 00:04:02.880 "get_zone_info": false, 00:04:02.881 "zone_management": false, 00:04:02.881 "zone_append": false, 00:04:02.881 "compare": false, 00:04:02.881 "compare_and_write": false, 00:04:02.881 "abort": true, 00:04:02.881 "seek_hole": false, 00:04:02.881 "seek_data": false, 00:04:02.881 "copy": true, 00:04:02.881 "nvme_iov_md": false 00:04:02.881 }, 00:04:02.881 "memory_domains": [ 00:04:02.881 { 00:04:02.881 "dma_device_id": "system", 00:04:02.881 "dma_device_type": 1 00:04:02.881 }, 00:04:02.881 { 00:04:02.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.881 "dma_device_type": 2 00:04:02.881 } 00:04:02.881 ], 00:04:02.881 "driver_specific": {} 00:04:02.881 }, 00:04:02.881 { 00:04:02.881 "name": "Passthru0", 00:04:02.881 "aliases": [ 00:04:02.881 "32acf11a-3938-5db1-bc73-d811c17f970b" 00:04:02.881 ], 00:04:02.881 "product_name": "passthru", 00:04:02.881 
"block_size": 512, 00:04:02.881 "num_blocks": 16384, 00:04:02.881 "uuid": "32acf11a-3938-5db1-bc73-d811c17f970b", 00:04:02.881 "assigned_rate_limits": { 00:04:02.881 "rw_ios_per_sec": 0, 00:04:02.881 "rw_mbytes_per_sec": 0, 00:04:02.881 "r_mbytes_per_sec": 0, 00:04:02.881 "w_mbytes_per_sec": 0 00:04:02.881 }, 00:04:02.881 "claimed": false, 00:04:02.881 "zoned": false, 00:04:02.881 "supported_io_types": { 00:04:02.881 "read": true, 00:04:02.881 "write": true, 00:04:02.881 "unmap": true, 00:04:02.881 "flush": true, 00:04:02.881 "reset": true, 00:04:02.881 "nvme_admin": false, 00:04:02.881 "nvme_io": false, 00:04:02.881 "nvme_io_md": false, 00:04:02.881 "write_zeroes": true, 00:04:02.881 "zcopy": true, 00:04:02.881 "get_zone_info": false, 00:04:02.881 "zone_management": false, 00:04:02.881 "zone_append": false, 00:04:02.881 "compare": false, 00:04:02.881 "compare_and_write": false, 00:04:02.881 "abort": true, 00:04:02.881 "seek_hole": false, 00:04:02.881 "seek_data": false, 00:04:02.881 "copy": true, 00:04:02.881 "nvme_iov_md": false 00:04:02.881 }, 00:04:02.881 "memory_domains": [ 00:04:02.881 { 00:04:02.881 "dma_device_id": "system", 00:04:02.881 "dma_device_type": 1 00:04:02.881 }, 00:04:02.881 { 00:04:02.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.881 "dma_device_type": 2 00:04:02.881 } 00:04:02.881 ], 00:04:02.881 "driver_specific": { 00:04:02.881 "passthru": { 00:04:02.881 "name": "Passthru0", 00:04:02.881 "base_bdev_name": "Malloc0" 00:04:02.881 } 00:04:02.881 } 00:04:02.881 } 00:04:02.881 ]' 00:04:02.881 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.881 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.881 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.881 10:32:28 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.881 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.881 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.881 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.881 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.881 10:32:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.881 00:04:02.881 real 0m0.373s 00:04:02.881 user 0m0.211s 00:04:02.881 sys 0m0.047s 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.881 10:32:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.881 ************************************ 00:04:02.881 END TEST rpc_integrity 00:04:02.881 ************************************ 00:04:02.881 10:32:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:02.881 10:32:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.881 10:32:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.881 10:32:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.141 ************************************ 00:04:03.141 START TEST rpc_plugins 00:04:03.141 ************************************ 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:03.141 { 00:04:03.141 "name": "Malloc1", 00:04:03.141 "aliases": [ 00:04:03.141 "ef2bf082-1d3b-4e98-a8bd-178e15dcab73" 00:04:03.141 ], 00:04:03.141 "product_name": "Malloc disk", 00:04:03.141 "block_size": 4096, 00:04:03.141 "num_blocks": 256, 00:04:03.141 "uuid": "ef2bf082-1d3b-4e98-a8bd-178e15dcab73", 00:04:03.141 "assigned_rate_limits": { 00:04:03.141 "rw_ios_per_sec": 0, 00:04:03.141 "rw_mbytes_per_sec": 0, 00:04:03.141 "r_mbytes_per_sec": 0, 00:04:03.141 "w_mbytes_per_sec": 0 00:04:03.141 }, 00:04:03.141 "claimed": false, 00:04:03.141 "zoned": false, 00:04:03.141 "supported_io_types": { 00:04:03.141 "read": true, 00:04:03.141 "write": true, 00:04:03.141 "unmap": true, 00:04:03.141 "flush": true, 00:04:03.141 "reset": true, 00:04:03.141 "nvme_admin": false, 00:04:03.141 "nvme_io": false, 00:04:03.141 "nvme_io_md": false, 00:04:03.141 "write_zeroes": true, 00:04:03.141 "zcopy": true, 00:04:03.141 "get_zone_info": false, 00:04:03.141 "zone_management": false, 00:04:03.141 "zone_append": false, 00:04:03.141 "compare": false, 00:04:03.141 "compare_and_write": false, 00:04:03.141 "abort": true, 00:04:03.141 "seek_hole": false, 00:04:03.141 "seek_data": false, 00:04:03.141 "copy": 
true, 00:04:03.141 "nvme_iov_md": false 00:04:03.141 }, 00:04:03.141 "memory_domains": [ 00:04:03.141 { 00:04:03.141 "dma_device_id": "system", 00:04:03.141 "dma_device_type": 1 00:04:03.141 }, 00:04:03.141 { 00:04:03.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.141 "dma_device_type": 2 00:04:03.141 } 00:04:03.141 ], 00:04:03.141 "driver_specific": {} 00:04:03.141 } 00:04:03.141 ]' 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:03.141 10:32:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:03.141 00:04:03.141 real 0m0.160s 00:04:03.141 user 0m0.089s 00:04:03.141 sys 0m0.029s 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.141 10:32:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.141 ************************************ 00:04:03.141 END TEST rpc_plugins 00:04:03.141 ************************************ 00:04:03.141 10:32:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:03.141 10:32:28 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.141 10:32:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.142 10:32:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.142 ************************************ 00:04:03.142 START TEST rpc_trace_cmd_test 00:04:03.142 ************************************ 00:04:03.142 10:32:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:03.142 10:32:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:03.142 10:32:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:03.142 10:32:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.142 10:32:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.142 10:32:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.142 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:03.142 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56788", 00:04:03.142 "tpoint_group_mask": "0x8", 00:04:03.142 "iscsi_conn": { 00:04:03.142 "mask": "0x2", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "scsi": { 00:04:03.142 "mask": "0x4", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "bdev": { 00:04:03.142 "mask": "0x8", 00:04:03.142 "tpoint_mask": "0xffffffffffffffff" 00:04:03.142 }, 00:04:03.142 "nvmf_rdma": { 00:04:03.142 "mask": "0x10", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "nvmf_tcp": { 00:04:03.142 "mask": "0x20", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "ftl": { 00:04:03.142 "mask": "0x40", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "blobfs": { 00:04:03.142 "mask": "0x80", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "dsa": { 00:04:03.142 "mask": "0x200", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "thread": { 00:04:03.142 "mask": "0x400", 00:04:03.142 
"tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "nvme_pcie": { 00:04:03.142 "mask": "0x800", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "iaa": { 00:04:03.142 "mask": "0x1000", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "nvme_tcp": { 00:04:03.142 "mask": "0x2000", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "bdev_nvme": { 00:04:03.142 "mask": "0x4000", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "sock": { 00:04:03.142 "mask": "0x8000", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "blob": { 00:04:03.142 "mask": "0x10000", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "bdev_raid": { 00:04:03.142 "mask": "0x20000", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 }, 00:04:03.142 "scheduler": { 00:04:03.142 "mask": "0x40000", 00:04:03.142 "tpoint_mask": "0x0" 00:04:03.142 } 00:04:03.142 }' 00:04:03.142 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:03.402 00:04:03.402 real 0m0.219s 00:04:03.402 user 0m0.174s 00:04:03.402 sys 0m0.036s 00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:03.402 10:32:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.402 ************************************ 00:04:03.402 END TEST rpc_trace_cmd_test 00:04:03.402 ************************************ 00:04:03.402 10:32:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:03.402 10:32:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:03.402 10:32:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:03.402 10:32:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.402 10:32:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.402 10:32:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.402 ************************************ 00:04:03.402 START TEST rpc_daemon_integrity 00:04:03.402 ************************************ 00:04:03.402 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:03.402 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.402 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.402 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.402 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.402 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.402 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.662 { 00:04:03.662 "name": "Malloc2", 00:04:03.662 "aliases": [ 00:04:03.662 "556ab30b-9b49-4bed-b6f3-e39f4e5571a0" 00:04:03.662 ], 00:04:03.662 "product_name": "Malloc disk", 00:04:03.662 "block_size": 512, 00:04:03.662 "num_blocks": 16384, 00:04:03.662 "uuid": "556ab30b-9b49-4bed-b6f3-e39f4e5571a0", 00:04:03.662 "assigned_rate_limits": { 00:04:03.662 "rw_ios_per_sec": 0, 00:04:03.662 "rw_mbytes_per_sec": 0, 00:04:03.662 "r_mbytes_per_sec": 0, 00:04:03.662 "w_mbytes_per_sec": 0 00:04:03.662 }, 00:04:03.662 "claimed": false, 00:04:03.662 "zoned": false, 00:04:03.662 "supported_io_types": { 00:04:03.662 "read": true, 00:04:03.662 "write": true, 00:04:03.662 "unmap": true, 00:04:03.662 "flush": true, 00:04:03.662 "reset": true, 00:04:03.662 "nvme_admin": false, 00:04:03.662 "nvme_io": false, 00:04:03.662 "nvme_io_md": false, 00:04:03.662 "write_zeroes": true, 00:04:03.662 "zcopy": true, 00:04:03.662 "get_zone_info": false, 00:04:03.662 "zone_management": false, 00:04:03.662 "zone_append": false, 00:04:03.662 "compare": false, 00:04:03.662 "compare_and_write": false, 00:04:03.662 "abort": true, 00:04:03.662 "seek_hole": false, 00:04:03.662 "seek_data": false, 00:04:03.662 "copy": true, 00:04:03.662 "nvme_iov_md": false 00:04:03.662 }, 00:04:03.662 "memory_domains": [ 00:04:03.662 { 00:04:03.662 "dma_device_id": "system", 00:04:03.662 "dma_device_type": 1 00:04:03.662 }, 00:04:03.662 { 00:04:03.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.662 "dma_device_type": 2 00:04:03.662 } 
00:04:03.662 ], 00:04:03.662 "driver_specific": {} 00:04:03.662 } 00:04:03.662 ]' 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.662 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.662 [2024-11-18 10:32:29.423851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:03.662 [2024-11-18 10:32:29.423921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.662 [2024-11-18 10:32:29.423960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:03.662 [2024-11-18 10:32:29.423983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.662 [2024-11-18 10:32:29.426369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.663 [2024-11-18 10:32:29.426410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.663 Passthru0 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.663 { 00:04:03.663 "name": "Malloc2", 00:04:03.663 "aliases": [ 00:04:03.663 "556ab30b-9b49-4bed-b6f3-e39f4e5571a0" 
00:04:03.663 ], 00:04:03.663 "product_name": "Malloc disk", 00:04:03.663 "block_size": 512, 00:04:03.663 "num_blocks": 16384, 00:04:03.663 "uuid": "556ab30b-9b49-4bed-b6f3-e39f4e5571a0", 00:04:03.663 "assigned_rate_limits": { 00:04:03.663 "rw_ios_per_sec": 0, 00:04:03.663 "rw_mbytes_per_sec": 0, 00:04:03.663 "r_mbytes_per_sec": 0, 00:04:03.663 "w_mbytes_per_sec": 0 00:04:03.663 }, 00:04:03.663 "claimed": true, 00:04:03.663 "claim_type": "exclusive_write", 00:04:03.663 "zoned": false, 00:04:03.663 "supported_io_types": { 00:04:03.663 "read": true, 00:04:03.663 "write": true, 00:04:03.663 "unmap": true, 00:04:03.663 "flush": true, 00:04:03.663 "reset": true, 00:04:03.663 "nvme_admin": false, 00:04:03.663 "nvme_io": false, 00:04:03.663 "nvme_io_md": false, 00:04:03.663 "write_zeroes": true, 00:04:03.663 "zcopy": true, 00:04:03.663 "get_zone_info": false, 00:04:03.663 "zone_management": false, 00:04:03.663 "zone_append": false, 00:04:03.663 "compare": false, 00:04:03.663 "compare_and_write": false, 00:04:03.663 "abort": true, 00:04:03.663 "seek_hole": false, 00:04:03.663 "seek_data": false, 00:04:03.663 "copy": true, 00:04:03.663 "nvme_iov_md": false 00:04:03.663 }, 00:04:03.663 "memory_domains": [ 00:04:03.663 { 00:04:03.663 "dma_device_id": "system", 00:04:03.663 "dma_device_type": 1 00:04:03.663 }, 00:04:03.663 { 00:04:03.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.663 "dma_device_type": 2 00:04:03.663 } 00:04:03.663 ], 00:04:03.663 "driver_specific": {} 00:04:03.663 }, 00:04:03.663 { 00:04:03.663 "name": "Passthru0", 00:04:03.663 "aliases": [ 00:04:03.663 "711f4a79-9e88-5a63-8d5c-9a93dbb5afa3" 00:04:03.663 ], 00:04:03.663 "product_name": "passthru", 00:04:03.663 "block_size": 512, 00:04:03.663 "num_blocks": 16384, 00:04:03.663 "uuid": "711f4a79-9e88-5a63-8d5c-9a93dbb5afa3", 00:04:03.663 "assigned_rate_limits": { 00:04:03.663 "rw_ios_per_sec": 0, 00:04:03.663 "rw_mbytes_per_sec": 0, 00:04:03.663 "r_mbytes_per_sec": 0, 00:04:03.663 "w_mbytes_per_sec": 0 
00:04:03.663 }, 00:04:03.663 "claimed": false, 00:04:03.663 "zoned": false, 00:04:03.663 "supported_io_types": { 00:04:03.663 "read": true, 00:04:03.663 "write": true, 00:04:03.663 "unmap": true, 00:04:03.663 "flush": true, 00:04:03.663 "reset": true, 00:04:03.663 "nvme_admin": false, 00:04:03.663 "nvme_io": false, 00:04:03.663 "nvme_io_md": false, 00:04:03.663 "write_zeroes": true, 00:04:03.663 "zcopy": true, 00:04:03.663 "get_zone_info": false, 00:04:03.663 "zone_management": false, 00:04:03.663 "zone_append": false, 00:04:03.663 "compare": false, 00:04:03.663 "compare_and_write": false, 00:04:03.663 "abort": true, 00:04:03.663 "seek_hole": false, 00:04:03.663 "seek_data": false, 00:04:03.663 "copy": true, 00:04:03.663 "nvme_iov_md": false 00:04:03.663 }, 00:04:03.663 "memory_domains": [ 00:04:03.663 { 00:04:03.663 "dma_device_id": "system", 00:04:03.663 "dma_device_type": 1 00:04:03.663 }, 00:04:03.663 { 00:04:03.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.663 "dma_device_type": 2 00:04:03.663 } 00:04:03.663 ], 00:04:03.663 "driver_specific": { 00:04:03.663 "passthru": { 00:04:03.663 "name": "Passthru0", 00:04:03.663 "base_bdev_name": "Malloc2" 00:04:03.663 } 00:04:03.663 } 00:04:03.663 } 00:04:03.663 ]' 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:03.663 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.923 00:04:03.923 real 0m0.358s 00:04:03.923 user 0m0.198s 00:04:03.923 sys 0m0.059s 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.923 10:32:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.923 ************************************ 00:04:03.923 END TEST rpc_daemon_integrity 00:04:03.923 ************************************ 00:04:03.923 10:32:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:03.923 10:32:29 rpc -- rpc/rpc.sh@84 -- # killprocess 56788 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@954 -- # '[' -z 56788 ']' 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@958 -- # kill -0 56788 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@959 -- # uname 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56788 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.923 
killing process with pid 56788 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56788' 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@973 -- # kill 56788 00:04:03.923 10:32:29 rpc -- common/autotest_common.sh@978 -- # wait 56788 00:04:06.464 00:04:06.464 real 0m5.487s 00:04:06.464 user 0m5.782s 00:04:06.464 sys 0m1.095s 00:04:06.464 10:32:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.464 10:32:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.464 ************************************ 00:04:06.464 END TEST rpc 00:04:06.464 ************************************ 00:04:06.464 10:32:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:06.464 10:32:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.464 10:32:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.464 10:32:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.464 ************************************ 00:04:06.464 START TEST skip_rpc 00:04:06.464 ************************************ 00:04:06.465 10:32:32 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:06.724 * Looking for test storage... 
00:04:06.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.724 10:32:32 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.724 10:32:32 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.724 10:32:32 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.724 10:32:32 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.724 10:32:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.725 10:32:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.725 10:32:32 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.725 10:32:32 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.725 --rc genhtml_branch_coverage=1 00:04:06.725 --rc genhtml_function_coverage=1 00:04:06.725 --rc genhtml_legend=1 00:04:06.725 --rc geninfo_all_blocks=1 00:04:06.725 --rc geninfo_unexecuted_blocks=1 00:04:06.725 00:04:06.725 ' 00:04:06.725 10:32:32 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.725 --rc genhtml_branch_coverage=1 00:04:06.725 --rc genhtml_function_coverage=1 00:04:06.725 --rc genhtml_legend=1 00:04:06.725 --rc geninfo_all_blocks=1 00:04:06.725 --rc geninfo_unexecuted_blocks=1 00:04:06.725 00:04:06.725 ' 00:04:06.725 10:32:32 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:06.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.725 --rc genhtml_branch_coverage=1 00:04:06.725 --rc genhtml_function_coverage=1 00:04:06.725 --rc genhtml_legend=1 00:04:06.725 --rc geninfo_all_blocks=1 00:04:06.725 --rc geninfo_unexecuted_blocks=1 00:04:06.725 00:04:06.725 ' 00:04:06.725 10:32:32 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.725 --rc genhtml_branch_coverage=1 00:04:06.725 --rc genhtml_function_coverage=1 00:04:06.725 --rc genhtml_legend=1 00:04:06.725 --rc geninfo_all_blocks=1 00:04:06.725 --rc geninfo_unexecuted_blocks=1 00:04:06.725 00:04:06.725 ' 00:04:06.725 10:32:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:06.725 10:32:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:06.725 10:32:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:06.725 10:32:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.725 10:32:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.725 10:32:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.725 ************************************ 00:04:06.725 START TEST skip_rpc 00:04:06.725 ************************************ 00:04:06.725 10:32:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:06.725 10:32:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57017 00:04:06.725 10:32:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:06.725 10:32:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.725 10:32:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:06.985 [2024-11-18 10:32:32.619053] Starting SPDK v25.01-pre 
git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:06.985 [2024-11-18 10:32:32.619203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57017 ] 00:04:06.985 [2024-11-18 10:32:32.795493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.245 [2024-11-18 10:32:32.928621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57017 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57017 ']' 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57017 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57017 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.527 killing process with pid 57017 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57017' 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57017 00:04:12.527 10:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57017 00:04:14.481 00:04:14.481 real 0m7.555s 00:04:14.481 user 0m6.932s 00:04:14.481 sys 0m0.542s 00:04:14.481 10:32:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.481 10:32:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.481 ************************************ 00:04:14.481 END TEST skip_rpc 00:04:14.481 ************************************ 00:04:14.481 10:32:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:14.481 10:32:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.481 10:32:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.481 10:32:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.481 
************************************ 00:04:14.481 START TEST skip_rpc_with_json 00:04:14.481 ************************************ 00:04:14.481 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:14.481 10:32:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57121 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57121 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57121 ']' 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.482 10:32:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.482 [2024-11-18 10:32:40.250728] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:14.482 [2024-11-18 10:32:40.250874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57121 ] 00:04:14.742 [2024-11-18 10:32:40.428471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.742 [2024-11-18 10:32:40.556182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.126 [2024-11-18 10:32:41.582270] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.126 request: 00:04:16.126 { 00:04:16.126 "trtype": "tcp", 00:04:16.126 "method": "nvmf_get_transports", 00:04:16.126 "req_id": 1 00:04:16.126 } 00:04:16.126 Got JSON-RPC error response 00:04:16.126 response: 00:04:16.126 { 00:04:16.126 "code": -19, 00:04:16.126 "message": "No such device" 00:04:16.126 } 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.126 [2024-11-18 10:32:41.594374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.126 10:32:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.126 { 00:04:16.126 "subsystems": [ 00:04:16.126 { 00:04:16.126 "subsystem": "fsdev", 00:04:16.126 "config": [ 00:04:16.126 { 00:04:16.126 "method": "fsdev_set_opts", 00:04:16.126 "params": { 00:04:16.126 "fsdev_io_pool_size": 65535, 00:04:16.126 "fsdev_io_cache_size": 256 00:04:16.126 } 00:04:16.126 } 00:04:16.126 ] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "keyring", 00:04:16.126 "config": [] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "iobuf", 00:04:16.126 "config": [ 00:04:16.126 { 00:04:16.126 "method": "iobuf_set_options", 00:04:16.126 "params": { 00:04:16.126 "small_pool_count": 8192, 00:04:16.126 "large_pool_count": 1024, 00:04:16.126 "small_bufsize": 8192, 00:04:16.126 "large_bufsize": 135168, 00:04:16.126 "enable_numa": false 00:04:16.126 } 00:04:16.126 } 00:04:16.126 ] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "sock", 00:04:16.126 "config": [ 00:04:16.126 { 00:04:16.126 "method": "sock_set_default_impl", 00:04:16.126 "params": { 00:04:16.126 "impl_name": "posix" 00:04:16.126 } 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "method": "sock_impl_set_options", 00:04:16.126 "params": { 00:04:16.126 "impl_name": "ssl", 00:04:16.126 "recv_buf_size": 4096, 00:04:16.126 "send_buf_size": 4096, 00:04:16.126 "enable_recv_pipe": true, 00:04:16.126 "enable_quickack": false, 00:04:16.126 
"enable_placement_id": 0, 00:04:16.126 "enable_zerocopy_send_server": true, 00:04:16.126 "enable_zerocopy_send_client": false, 00:04:16.126 "zerocopy_threshold": 0, 00:04:16.126 "tls_version": 0, 00:04:16.126 "enable_ktls": false 00:04:16.126 } 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "method": "sock_impl_set_options", 00:04:16.126 "params": { 00:04:16.126 "impl_name": "posix", 00:04:16.126 "recv_buf_size": 2097152, 00:04:16.126 "send_buf_size": 2097152, 00:04:16.126 "enable_recv_pipe": true, 00:04:16.126 "enable_quickack": false, 00:04:16.126 "enable_placement_id": 0, 00:04:16.126 "enable_zerocopy_send_server": true, 00:04:16.126 "enable_zerocopy_send_client": false, 00:04:16.126 "zerocopy_threshold": 0, 00:04:16.126 "tls_version": 0, 00:04:16.126 "enable_ktls": false 00:04:16.126 } 00:04:16.126 } 00:04:16.126 ] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "vmd", 00:04:16.126 "config": [] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "accel", 00:04:16.126 "config": [ 00:04:16.126 { 00:04:16.126 "method": "accel_set_options", 00:04:16.126 "params": { 00:04:16.126 "small_cache_size": 128, 00:04:16.126 "large_cache_size": 16, 00:04:16.126 "task_count": 2048, 00:04:16.126 "sequence_count": 2048, 00:04:16.126 "buf_count": 2048 00:04:16.126 } 00:04:16.126 } 00:04:16.126 ] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "bdev", 00:04:16.126 "config": [ 00:04:16.126 { 00:04:16.126 "method": "bdev_set_options", 00:04:16.126 "params": { 00:04:16.126 "bdev_io_pool_size": 65535, 00:04:16.126 "bdev_io_cache_size": 256, 00:04:16.126 "bdev_auto_examine": true, 00:04:16.126 "iobuf_small_cache_size": 128, 00:04:16.126 "iobuf_large_cache_size": 16 00:04:16.126 } 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "method": "bdev_raid_set_options", 00:04:16.126 "params": { 00:04:16.126 "process_window_size_kb": 1024, 00:04:16.126 "process_max_bandwidth_mb_sec": 0 00:04:16.126 } 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "method": "bdev_iscsi_set_options", 
00:04:16.126 "params": { 00:04:16.126 "timeout_sec": 30 00:04:16.126 } 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "method": "bdev_nvme_set_options", 00:04:16.126 "params": { 00:04:16.126 "action_on_timeout": "none", 00:04:16.126 "timeout_us": 0, 00:04:16.126 "timeout_admin_us": 0, 00:04:16.126 "keep_alive_timeout_ms": 10000, 00:04:16.126 "arbitration_burst": 0, 00:04:16.126 "low_priority_weight": 0, 00:04:16.126 "medium_priority_weight": 0, 00:04:16.126 "high_priority_weight": 0, 00:04:16.126 "nvme_adminq_poll_period_us": 10000, 00:04:16.126 "nvme_ioq_poll_period_us": 0, 00:04:16.126 "io_queue_requests": 0, 00:04:16.126 "delay_cmd_submit": true, 00:04:16.126 "transport_retry_count": 4, 00:04:16.126 "bdev_retry_count": 3, 00:04:16.126 "transport_ack_timeout": 0, 00:04:16.126 "ctrlr_loss_timeout_sec": 0, 00:04:16.126 "reconnect_delay_sec": 0, 00:04:16.126 "fast_io_fail_timeout_sec": 0, 00:04:16.126 "disable_auto_failback": false, 00:04:16.126 "generate_uuids": false, 00:04:16.126 "transport_tos": 0, 00:04:16.126 "nvme_error_stat": false, 00:04:16.126 "rdma_srq_size": 0, 00:04:16.126 "io_path_stat": false, 00:04:16.126 "allow_accel_sequence": false, 00:04:16.126 "rdma_max_cq_size": 0, 00:04:16.126 "rdma_cm_event_timeout_ms": 0, 00:04:16.126 "dhchap_digests": [ 00:04:16.126 "sha256", 00:04:16.126 "sha384", 00:04:16.126 "sha512" 00:04:16.126 ], 00:04:16.126 "dhchap_dhgroups": [ 00:04:16.126 "null", 00:04:16.126 "ffdhe2048", 00:04:16.126 "ffdhe3072", 00:04:16.126 "ffdhe4096", 00:04:16.126 "ffdhe6144", 00:04:16.126 "ffdhe8192" 00:04:16.126 ] 00:04:16.126 } 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "method": "bdev_nvme_set_hotplug", 00:04:16.126 "params": { 00:04:16.126 "period_us": 100000, 00:04:16.126 "enable": false 00:04:16.126 } 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "method": "bdev_wait_for_examine" 00:04:16.126 } 00:04:16.126 ] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "scsi", 00:04:16.126 "config": null 00:04:16.126 }, 00:04:16.126 { 
00:04:16.126 "subsystem": "scheduler", 00:04:16.126 "config": [ 00:04:16.126 { 00:04:16.126 "method": "framework_set_scheduler", 00:04:16.126 "params": { 00:04:16.126 "name": "static" 00:04:16.126 } 00:04:16.126 } 00:04:16.126 ] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "vhost_scsi", 00:04:16.126 "config": [] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "vhost_blk", 00:04:16.126 "config": [] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "ublk", 00:04:16.126 "config": [] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "nbd", 00:04:16.126 "config": [] 00:04:16.126 }, 00:04:16.126 { 00:04:16.126 "subsystem": "nvmf", 00:04:16.126 "config": [ 00:04:16.126 { 00:04:16.127 "method": "nvmf_set_config", 00:04:16.127 "params": { 00:04:16.127 "discovery_filter": "match_any", 00:04:16.127 "admin_cmd_passthru": { 00:04:16.127 "identify_ctrlr": false 00:04:16.127 }, 00:04:16.127 "dhchap_digests": [ 00:04:16.127 "sha256", 00:04:16.127 "sha384", 00:04:16.127 "sha512" 00:04:16.127 ], 00:04:16.127 "dhchap_dhgroups": [ 00:04:16.127 "null", 00:04:16.127 "ffdhe2048", 00:04:16.127 "ffdhe3072", 00:04:16.127 "ffdhe4096", 00:04:16.127 "ffdhe6144", 00:04:16.127 "ffdhe8192" 00:04:16.127 ] 00:04:16.127 } 00:04:16.127 }, 00:04:16.127 { 00:04:16.127 "method": "nvmf_set_max_subsystems", 00:04:16.127 "params": { 00:04:16.127 "max_subsystems": 1024 00:04:16.127 } 00:04:16.127 }, 00:04:16.127 { 00:04:16.127 "method": "nvmf_set_crdt", 00:04:16.127 "params": { 00:04:16.127 "crdt1": 0, 00:04:16.127 "crdt2": 0, 00:04:16.127 "crdt3": 0 00:04:16.127 } 00:04:16.127 }, 00:04:16.127 { 00:04:16.127 "method": "nvmf_create_transport", 00:04:16.127 "params": { 00:04:16.127 "trtype": "TCP", 00:04:16.127 "max_queue_depth": 128, 00:04:16.127 "max_io_qpairs_per_ctrlr": 127, 00:04:16.127 "in_capsule_data_size": 4096, 00:04:16.127 "max_io_size": 131072, 00:04:16.127 "io_unit_size": 131072, 00:04:16.127 "max_aq_depth": 128, 00:04:16.127 "num_shared_buffers": 511, 
00:04:16.127 "buf_cache_size": 4294967295, 00:04:16.127 "dif_insert_or_strip": false, 00:04:16.127 "zcopy": false, 00:04:16.127 "c2h_success": true, 00:04:16.127 "sock_priority": 0, 00:04:16.127 "abort_timeout_sec": 1, 00:04:16.127 "ack_timeout": 0, 00:04:16.127 "data_wr_pool_size": 0 00:04:16.127 } 00:04:16.127 } 00:04:16.127 ] 00:04:16.127 }, 00:04:16.127 { 00:04:16.127 "subsystem": "iscsi", 00:04:16.127 "config": [ 00:04:16.127 { 00:04:16.127 "method": "iscsi_set_options", 00:04:16.127 "params": { 00:04:16.127 "node_base": "iqn.2016-06.io.spdk", 00:04:16.127 "max_sessions": 128, 00:04:16.127 "max_connections_per_session": 2, 00:04:16.127 "max_queue_depth": 64, 00:04:16.127 "default_time2wait": 2, 00:04:16.127 "default_time2retain": 20, 00:04:16.127 "first_burst_length": 8192, 00:04:16.127 "immediate_data": true, 00:04:16.127 "allow_duplicated_isid": false, 00:04:16.127 "error_recovery_level": 0, 00:04:16.127 "nop_timeout": 60, 00:04:16.127 "nop_in_interval": 30, 00:04:16.127 "disable_chap": false, 00:04:16.127 "require_chap": false, 00:04:16.127 "mutual_chap": false, 00:04:16.127 "chap_group": 0, 00:04:16.127 "max_large_datain_per_connection": 64, 00:04:16.127 "max_r2t_per_connection": 4, 00:04:16.127 "pdu_pool_size": 36864, 00:04:16.127 "immediate_data_pool_size": 16384, 00:04:16.127 "data_out_pool_size": 2048 00:04:16.127 } 00:04:16.127 } 00:04:16.127 ] 00:04:16.127 } 00:04:16.127 ] 00:04:16.127 } 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57121 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57121 ']' 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57121 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57121 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.127 killing process with pid 57121 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57121' 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57121 00:04:16.127 10:32:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57121 00:04:18.667 10:32:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57177 00:04:18.667 10:32:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.667 10:32:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57177 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57177 ']' 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57177 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57177 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:23.958 killing process with pid 57177 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57177' 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57177 00:04:23.958 10:32:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57177 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.501 00:04:26.501 real 0m11.667s 00:04:26.501 user 0m10.838s 00:04:26.501 sys 0m1.185s 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.501 ************************************ 00:04:26.501 END TEST skip_rpc_with_json 00:04:26.501 ************************************ 00:04:26.501 10:32:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.501 10:32:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.501 10:32:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.501 10:32:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.501 ************************************ 00:04:26.501 START TEST skip_rpc_with_delay 00:04:26.501 ************************************ 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:26.501 10:32:51 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:26.501 10:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.501 [2024-11-18 10:32:51.982670] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:26.501 10:32:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:26.501 10:32:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.501 ************************************ 00:04:26.501 END TEST skip_rpc_with_delay 00:04:26.501 ************************************ 00:04:26.501 10:32:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:26.501 10:32:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.501 00:04:26.501 real 0m0.170s 00:04:26.501 user 0m0.092s 00:04:26.501 sys 0m0.075s 00:04:26.501 10:32:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.501 10:32:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:26.501 10:32:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:26.501 10:32:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:26.501 10:32:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:26.501 10:32:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.501 10:32:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.501 10:32:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.501 ************************************ 00:04:26.501 START TEST exit_on_failed_rpc_init 00:04:26.501 ************************************ 00:04:26.501 10:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:26.501 10:32:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57316 00:04:26.501 10:32:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.501 10:32:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57316 00:04:26.501 10:32:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57316 ']' 00:04:26.501 10:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.501 10:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.502 10:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.502 10:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.502 10:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.502 [2024-11-18 10:32:52.226893] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:26.502 [2024-11-18 10:32:52.227039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57316 ] 00:04:26.769 [2024-11-18 10:32:52.403153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.769 [2024-11-18 10:32:52.540261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.717 10:32:53 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:27.717 10:32:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.977 [2024-11-18 10:32:53.624019] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:27.977 [2024-11-18 10:32:53.624147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57334 ] 00:04:27.977 [2024-11-18 10:32:53.795743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.237 [2024-11-18 10:32:53.903281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.237 [2024-11-18 10:32:53.903410] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:28.237 [2024-11-18 10:32:53.903425] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:28.237 [2024-11-18 10:32:53.903437] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57316 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57316 ']' 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57316 00:04:28.498 10:32:54 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57316 00:04:28.498 killing process with pid 57316 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57316' 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57316 00:04:28.498 10:32:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57316 00:04:31.039 ************************************ 00:04:31.039 END TEST exit_on_failed_rpc_init 00:04:31.039 ************************************ 00:04:31.039 00:04:31.039 real 0m4.535s 00:04:31.039 user 0m4.682s 00:04:31.039 sys 0m0.709s 00:04:31.039 10:32:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.039 10:32:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.039 10:32:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.039 00:04:31.039 real 0m24.443s 00:04:31.039 user 0m22.759s 00:04:31.039 sys 0m2.837s 00:04:31.039 10:32:56 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.039 10:32:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.039 ************************************ 00:04:31.039 END TEST skip_rpc 00:04:31.039 ************************************ 00:04:31.039 10:32:56 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:31.039 10:32:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.039 10:32:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.039 10:32:56 -- common/autotest_common.sh@10 -- # set +x 00:04:31.039 ************************************ 00:04:31.039 START TEST rpc_client 00:04:31.039 ************************************ 00:04:31.039 10:32:56 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:31.039 * Looking for test storage... 00:04:31.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:31.039 10:32:56 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.039 10:32:56 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.039 10:32:56 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.299 10:32:56 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.299 10:32:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.299 10:32:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.299 10:32:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.299 10:32:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.299 10:32:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.299 10:32:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.299 10:32:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:31.300 10:32:56 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:31.300 10:32:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.300 10:32:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:31.300 10:32:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.300 10:32:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.300 10:32:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.300 10:32:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:31.300 10:32:57 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.300 10:32:57 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.300 --rc genhtml_branch_coverage=1 00:04:31.300 --rc genhtml_function_coverage=1 00:04:31.300 --rc genhtml_legend=1 00:04:31.300 --rc geninfo_all_blocks=1 00:04:31.300 --rc geninfo_unexecuted_blocks=1 00:04:31.300 00:04:31.300 ' 00:04:31.300 10:32:57 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.300 --rc genhtml_branch_coverage=1 00:04:31.300 --rc genhtml_function_coverage=1 00:04:31.300 --rc 
genhtml_legend=1 00:04:31.300 --rc geninfo_all_blocks=1 00:04:31.300 --rc geninfo_unexecuted_blocks=1 00:04:31.300 00:04:31.300 ' 00:04:31.300 10:32:57 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.300 --rc genhtml_branch_coverage=1 00:04:31.300 --rc genhtml_function_coverage=1 00:04:31.300 --rc genhtml_legend=1 00:04:31.300 --rc geninfo_all_blocks=1 00:04:31.300 --rc geninfo_unexecuted_blocks=1 00:04:31.300 00:04:31.300 ' 00:04:31.300 10:32:57 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.300 --rc genhtml_branch_coverage=1 00:04:31.300 --rc genhtml_function_coverage=1 00:04:31.300 --rc genhtml_legend=1 00:04:31.300 --rc geninfo_all_blocks=1 00:04:31.300 --rc geninfo_unexecuted_blocks=1 00:04:31.300 00:04:31.300 ' 00:04:31.300 10:32:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:31.300 OK 00:04:31.300 10:32:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:31.300 00:04:31.300 real 0m0.302s 00:04:31.300 user 0m0.170s 00:04:31.300 sys 0m0.150s 00:04:31.300 10:32:57 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.300 10:32:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:31.300 ************************************ 00:04:31.300 END TEST rpc_client 00:04:31.300 ************************************ 00:04:31.300 10:32:57 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:31.300 10:32:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.300 10:32:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.300 10:32:57 -- common/autotest_common.sh@10 -- # set +x 00:04:31.300 ************************************ 00:04:31.300 START TEST json_config 
00:04:31.300 ************************************ 00:04:31.300 10:32:57 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:31.561 10:32:57 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.561 10:32:57 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.561 10:32:57 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.561 10:32:57 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.561 10:32:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.561 10:32:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.561 10:32:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.561 10:32:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.561 10:32:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.561 10:32:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.561 10:32:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.561 10:32:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.561 10:32:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.561 10:32:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.561 10:32:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.561 10:32:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:31.561 10:32:57 json_config -- scripts/common.sh@345 -- # : 1 00:04:31.561 10:32:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.561 10:32:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.561 10:32:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:31.561 10:32:57 json_config -- scripts/common.sh@353 -- # local d=1 00:04:31.561 10:32:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.561 10:32:57 json_config -- scripts/common.sh@355 -- # echo 1 00:04:31.561 10:32:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.561 10:32:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:31.561 10:32:57 json_config -- scripts/common.sh@353 -- # local d=2 00:04:31.561 10:32:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.561 10:32:57 json_config -- scripts/common.sh@355 -- # echo 2 00:04:31.561 10:32:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.561 10:32:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.561 10:32:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.561 10:32:57 json_config -- scripts/common.sh@368 -- # return 0 00:04:31.561 10:32:57 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.561 10:32:57 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.561 --rc genhtml_branch_coverage=1 00:04:31.561 --rc genhtml_function_coverage=1 00:04:31.561 --rc genhtml_legend=1 00:04:31.561 --rc geninfo_all_blocks=1 00:04:31.561 --rc geninfo_unexecuted_blocks=1 00:04:31.561 00:04:31.561 ' 00:04:31.561 10:32:57 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.561 --rc genhtml_branch_coverage=1 00:04:31.561 --rc genhtml_function_coverage=1 00:04:31.561 --rc genhtml_legend=1 00:04:31.561 --rc geninfo_all_blocks=1 00:04:31.561 --rc geninfo_unexecuted_blocks=1 00:04:31.561 00:04:31.561 ' 00:04:31.561 10:32:57 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.561 --rc genhtml_branch_coverage=1 00:04:31.561 --rc genhtml_function_coverage=1 00:04:31.561 --rc genhtml_legend=1 00:04:31.561 --rc geninfo_all_blocks=1 00:04:31.561 --rc geninfo_unexecuted_blocks=1 00:04:31.561 00:04:31.561 ' 00:04:31.561 10:32:57 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.561 --rc genhtml_branch_coverage=1 00:04:31.561 --rc genhtml_function_coverage=1 00:04:31.561 --rc genhtml_legend=1 00:04:31.561 --rc geninfo_all_blocks=1 00:04:31.561 --rc geninfo_unexecuted_blocks=1 00:04:31.561 00:04:31.561 ' 00:04:31.561 10:32:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:83f02efc-e39e-4041-b990-41110c7eb81d 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=83f02efc-e39e-4041-b990-41110c7eb81d 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.561 10:32:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:31.561 10:32:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.561 10:32:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.561 10:32:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.562 10:32:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.562 10:32:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.562 10:32:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.562 10:32:57 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.562 10:32:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:31.562 10:32:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@51 -- # : 0 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.562 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.562 10:32:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.562 10:32:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:31.562 10:32:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:31.562 10:32:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:31.562 10:32:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:31.562 10:32:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:31.562 WARNING: No tests are enabled so not running JSON configuration tests 00:04:31.562 10:32:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:31.562 10:32:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:31.562 00:04:31.562 real 0m0.232s 00:04:31.562 user 0m0.147s 00:04:31.562 sys 0m0.094s 00:04:31.562 10:32:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.562 10:32:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.562 ************************************ 00:04:31.562 END TEST json_config 00:04:31.562 ************************************ 00:04:31.562 10:32:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:31.562 10:32:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.562 10:32:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.562 10:32:57 -- common/autotest_common.sh@10 -- # set +x 00:04:31.823 ************************************ 00:04:31.823 START TEST json_config_extra_key 00:04:31.823 ************************************ 00:04:31.823 10:32:57 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:31.823 10:32:57 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.823 10:32:57 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:31.823 10:32:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.823 10:32:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.823 10:32:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.824 --rc genhtml_branch_coverage=1 00:04:31.824 --rc genhtml_function_coverage=1 00:04:31.824 --rc genhtml_legend=1 00:04:31.824 --rc geninfo_all_blocks=1 00:04:31.824 --rc geninfo_unexecuted_blocks=1 00:04:31.824 00:04:31.824 ' 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.824 --rc genhtml_branch_coverage=1 00:04:31.824 --rc genhtml_function_coverage=1 00:04:31.824 --rc 
genhtml_legend=1 00:04:31.824 --rc geninfo_all_blocks=1 00:04:31.824 --rc geninfo_unexecuted_blocks=1 00:04:31.824 00:04:31.824 ' 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.824 --rc genhtml_branch_coverage=1 00:04:31.824 --rc genhtml_function_coverage=1 00:04:31.824 --rc genhtml_legend=1 00:04:31.824 --rc geninfo_all_blocks=1 00:04:31.824 --rc geninfo_unexecuted_blocks=1 00:04:31.824 00:04:31.824 ' 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.824 --rc genhtml_branch_coverage=1 00:04:31.824 --rc genhtml_function_coverage=1 00:04:31.824 --rc genhtml_legend=1 00:04:31.824 --rc geninfo_all_blocks=1 00:04:31.824 --rc geninfo_unexecuted_blocks=1 00:04:31.824 00:04:31.824 ' 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:83f02efc-e39e-4041-b990-41110c7eb81d 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=83f02efc-e39e-4041-b990-41110c7eb81d 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:31.824 10:32:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.824 10:32:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.824 10:32:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.824 10:32:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.824 10:32:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.824 10:32:57 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.824 10:32:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.824 10:32:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:31.824 10:32:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.824 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.824 10:32:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:31.824 INFO: launching applications... 00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:31.824 10:32:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57544 00:04:31.824 Waiting for target to run... 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57544 /var/tmp/spdk_tgt.sock 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57544 ']' 00:04:31.824 10:32:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.824 10:32:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.085 [2024-11-18 10:32:57.767451] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:32.085 [2024-11-18 10:32:57.767590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57544 ] 00:04:32.345 [2024-11-18 10:32:58.157667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.605 [2024-11-18 10:32:58.270775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.176 10:32:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.176 10:32:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:33.176 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.176 INFO: shutting down applications... 00:04:33.176 10:32:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:33.176 10:32:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57544 ]] 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57544 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57544 00:04:33.176 10:32:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.757 10:32:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.757 10:32:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.757 10:32:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57544 00:04:33.757 10:32:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.325 10:32:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.325 10:32:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.325 10:32:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57544 00:04:34.325 10:32:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.896 10:33:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.896 10:33:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.896 10:33:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57544 00:04:34.896 10:33:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.156 10:33:00 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:35.156 10:33:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.156 10:33:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57544 00:04:35.156 10:33:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.725 10:33:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.725 10:33:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.725 10:33:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57544 00:04:35.725 10:33:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.295 10:33:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.295 10:33:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.295 10:33:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57544 00:04:36.295 10:33:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.295 10:33:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:36.295 10:33:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.295 SPDK target shutdown done 00:04:36.295 10:33:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.295 Success 00:04:36.295 10:33:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:36.295 00:04:36.295 real 0m4.545s 00:04:36.295 user 0m4.058s 00:04:36.295 sys 0m0.608s 00:04:36.295 10:33:01 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.295 10:33:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.295 ************************************ 00:04:36.295 END TEST json_config_extra_key 00:04:36.295 ************************************ 00:04:36.295 10:33:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.295 10:33:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.295 10:33:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.295 10:33:02 -- common/autotest_common.sh@10 -- # set +x 00:04:36.295 ************************************ 00:04:36.295 START TEST alias_rpc 00:04:36.295 ************************************ 00:04:36.295 10:33:02 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.295 * Looking for test storage... 00:04:36.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.556 10:33:02 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.556 10:33:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.556 --rc genhtml_branch_coverage=1 00:04:36.556 --rc genhtml_function_coverage=1 00:04:36.556 --rc genhtml_legend=1 00:04:36.556 --rc geninfo_all_blocks=1 00:04:36.556 --rc geninfo_unexecuted_blocks=1 00:04:36.556 00:04:36.556 ' 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.556 --rc genhtml_branch_coverage=1 00:04:36.556 --rc genhtml_function_coverage=1 00:04:36.556 --rc 
genhtml_legend=1 00:04:36.556 --rc geninfo_all_blocks=1 00:04:36.556 --rc geninfo_unexecuted_blocks=1 00:04:36.556 00:04:36.556 ' 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.556 --rc genhtml_branch_coverage=1 00:04:36.556 --rc genhtml_function_coverage=1 00:04:36.556 --rc genhtml_legend=1 00:04:36.556 --rc geninfo_all_blocks=1 00:04:36.556 --rc geninfo_unexecuted_blocks=1 00:04:36.556 00:04:36.556 ' 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.556 --rc genhtml_branch_coverage=1 00:04:36.556 --rc genhtml_function_coverage=1 00:04:36.556 --rc genhtml_legend=1 00:04:36.556 --rc geninfo_all_blocks=1 00:04:36.556 --rc geninfo_unexecuted_blocks=1 00:04:36.556 00:04:36.556 ' 00:04:36.556 10:33:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:36.556 10:33:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57656 00:04:36.556 10:33:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.556 10:33:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57656 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57656 ']' 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.556 10:33:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.557 [2024-11-18 10:33:02.385542] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:36.557 [2024-11-18 10:33:02.385658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57656 ] 00:04:36.816 [2024-11-18 10:33:02.563291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.816 [2024-11-18 10:33:02.696360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:38.200 10:33:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:38.200 10:33:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57656 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57656 ']' 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57656 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57656 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.200 killing process with pid 57656 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57656' 00:04:38.200 10:33:03 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57656 00:04:38.200 10:33:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 57656 00:04:40.749 00:04:40.749 real 0m4.376s 00:04:40.749 user 0m4.179s 00:04:40.749 sys 0m0.757s 00:04:40.749 10:33:06 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.749 10:33:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.749 ************************************ 00:04:40.749 END TEST alias_rpc 00:04:40.749 ************************************ 00:04:40.749 10:33:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:40.749 10:33:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.749 10:33:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.749 10:33:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.749 10:33:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.749 ************************************ 00:04:40.749 START TEST spdkcli_tcp 00:04:40.749 ************************************ 00:04:40.749 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.749 * Looking for test storage... 
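The alias_rpc teardown traced above uses autotest_common.sh's `killprocess()`, which probes the target with `kill -0 $pid` before signalling it for real. A minimal Python sketch of that liveness probe (the helper name `process_alive` is ours, not SPDK's):

```python
import os

def process_alive(pid):
    # Signal 0 delivers nothing but makes the kernel validate the PID,
    # mirroring the `kill -0 $pid` check in killprocess().
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        # The process exists but belongs to another user.
        return True

print(process_alive(os.getpid()))  # the current process is always alive
```

The same probe is what lets the harness `wait` on the PID only after confirming the target was actually running.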
00:04:41.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.022 10:33:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.022 --rc genhtml_branch_coverage=1 00:04:41.022 --rc genhtml_function_coverage=1 00:04:41.022 --rc genhtml_legend=1 00:04:41.022 --rc geninfo_all_blocks=1 00:04:41.022 --rc geninfo_unexecuted_blocks=1 00:04:41.022 00:04:41.022 ' 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.022 --rc genhtml_branch_coverage=1 00:04:41.022 --rc genhtml_function_coverage=1 00:04:41.022 --rc genhtml_legend=1 00:04:41.022 --rc geninfo_all_blocks=1 00:04:41.022 --rc geninfo_unexecuted_blocks=1 00:04:41.022 00:04:41.022 ' 00:04:41.022 10:33:06 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.022 --rc genhtml_branch_coverage=1 00:04:41.022 --rc genhtml_function_coverage=1 00:04:41.022 --rc genhtml_legend=1 00:04:41.022 --rc geninfo_all_blocks=1 00:04:41.022 --rc geninfo_unexecuted_blocks=1 00:04:41.022 00:04:41.022 ' 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.022 --rc genhtml_branch_coverage=1 00:04:41.022 --rc genhtml_function_coverage=1 00:04:41.022 --rc genhtml_legend=1 00:04:41.022 --rc geninfo_all_blocks=1 00:04:41.022 --rc geninfo_unexecuted_blocks=1 00:04:41.022 00:04:41.022 ' 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57768 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:41.022 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57768 00:04:41.022 10:33:06 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57768 ']' 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.022 10:33:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.022 [2024-11-18 10:33:06.857636] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:41.022 [2024-11-18 10:33:06.857782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57768 ] 00:04:41.282 [2024-11-18 10:33:07.038239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.542 [2024-11-18 10:33:07.177042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.542 [2024-11-18 10:33:07.177081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.482 10:33:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.482 10:33:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:42.482 10:33:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:42.482 10:33:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57791 00:04:42.482 10:33:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:42.742 [ 00:04:42.742 "bdev_malloc_delete", 
00:04:42.742 "bdev_malloc_create", 00:04:42.742 "bdev_null_resize", 00:04:42.742 "bdev_null_delete", 00:04:42.742 "bdev_null_create", 00:04:42.742 "bdev_nvme_cuse_unregister", 00:04:42.742 "bdev_nvme_cuse_register", 00:04:42.742 "bdev_opal_new_user", 00:04:42.742 "bdev_opal_set_lock_state", 00:04:42.742 "bdev_opal_delete", 00:04:42.742 "bdev_opal_get_info", 00:04:42.742 "bdev_opal_create", 00:04:42.742 "bdev_nvme_opal_revert", 00:04:42.742 "bdev_nvme_opal_init", 00:04:42.742 "bdev_nvme_send_cmd", 00:04:42.742 "bdev_nvme_set_keys", 00:04:42.742 "bdev_nvme_get_path_iostat", 00:04:42.742 "bdev_nvme_get_mdns_discovery_info", 00:04:42.742 "bdev_nvme_stop_mdns_discovery", 00:04:42.742 "bdev_nvme_start_mdns_discovery", 00:04:42.742 "bdev_nvme_set_multipath_policy", 00:04:42.742 "bdev_nvme_set_preferred_path", 00:04:42.742 "bdev_nvme_get_io_paths", 00:04:42.742 "bdev_nvme_remove_error_injection", 00:04:42.742 "bdev_nvme_add_error_injection", 00:04:42.742 "bdev_nvme_get_discovery_info", 00:04:42.742 "bdev_nvme_stop_discovery", 00:04:42.742 "bdev_nvme_start_discovery", 00:04:42.742 "bdev_nvme_get_controller_health_info", 00:04:42.742 "bdev_nvme_disable_controller", 00:04:42.742 "bdev_nvme_enable_controller", 00:04:42.742 "bdev_nvme_reset_controller", 00:04:42.742 "bdev_nvme_get_transport_statistics", 00:04:42.742 "bdev_nvme_apply_firmware", 00:04:42.742 "bdev_nvme_detach_controller", 00:04:42.742 "bdev_nvme_get_controllers", 00:04:42.742 "bdev_nvme_attach_controller", 00:04:42.742 "bdev_nvme_set_hotplug", 00:04:42.742 "bdev_nvme_set_options", 00:04:42.742 "bdev_passthru_delete", 00:04:42.742 "bdev_passthru_create", 00:04:42.742 "bdev_lvol_set_parent_bdev", 00:04:42.742 "bdev_lvol_set_parent", 00:04:42.742 "bdev_lvol_check_shallow_copy", 00:04:42.742 "bdev_lvol_start_shallow_copy", 00:04:42.742 "bdev_lvol_grow_lvstore", 00:04:42.742 "bdev_lvol_get_lvols", 00:04:42.742 "bdev_lvol_get_lvstores", 00:04:42.742 "bdev_lvol_delete", 00:04:42.742 "bdev_lvol_set_read_only", 
00:04:42.742 "bdev_lvol_resize", 00:04:42.742 "bdev_lvol_decouple_parent", 00:04:42.742 "bdev_lvol_inflate", 00:04:42.742 "bdev_lvol_rename", 00:04:42.742 "bdev_lvol_clone_bdev", 00:04:42.742 "bdev_lvol_clone", 00:04:42.742 "bdev_lvol_snapshot", 00:04:42.742 "bdev_lvol_create", 00:04:42.742 "bdev_lvol_delete_lvstore", 00:04:42.742 "bdev_lvol_rename_lvstore", 00:04:42.742 "bdev_lvol_create_lvstore", 00:04:42.742 "bdev_raid_set_options", 00:04:42.742 "bdev_raid_remove_base_bdev", 00:04:42.742 "bdev_raid_add_base_bdev", 00:04:42.742 "bdev_raid_delete", 00:04:42.742 "bdev_raid_create", 00:04:42.742 "bdev_raid_get_bdevs", 00:04:42.742 "bdev_error_inject_error", 00:04:42.742 "bdev_error_delete", 00:04:42.742 "bdev_error_create", 00:04:42.742 "bdev_split_delete", 00:04:42.742 "bdev_split_create", 00:04:42.742 "bdev_delay_delete", 00:04:42.742 "bdev_delay_create", 00:04:42.742 "bdev_delay_update_latency", 00:04:42.742 "bdev_zone_block_delete", 00:04:42.742 "bdev_zone_block_create", 00:04:42.742 "blobfs_create", 00:04:42.742 "blobfs_detect", 00:04:42.742 "blobfs_set_cache_size", 00:04:42.742 "bdev_aio_delete", 00:04:42.742 "bdev_aio_rescan", 00:04:42.742 "bdev_aio_create", 00:04:42.742 "bdev_ftl_set_property", 00:04:42.742 "bdev_ftl_get_properties", 00:04:42.742 "bdev_ftl_get_stats", 00:04:42.742 "bdev_ftl_unmap", 00:04:42.742 "bdev_ftl_unload", 00:04:42.742 "bdev_ftl_delete", 00:04:42.742 "bdev_ftl_load", 00:04:42.742 "bdev_ftl_create", 00:04:42.742 "bdev_virtio_attach_controller", 00:04:42.742 "bdev_virtio_scsi_get_devices", 00:04:42.742 "bdev_virtio_detach_controller", 00:04:42.742 "bdev_virtio_blk_set_hotplug", 00:04:42.742 "bdev_iscsi_delete", 00:04:42.742 "bdev_iscsi_create", 00:04:42.742 "bdev_iscsi_set_options", 00:04:42.742 "accel_error_inject_error", 00:04:42.742 "ioat_scan_accel_module", 00:04:42.742 "dsa_scan_accel_module", 00:04:42.742 "iaa_scan_accel_module", 00:04:42.743 "keyring_file_remove_key", 00:04:42.743 "keyring_file_add_key", 00:04:42.743 
"keyring_linux_set_options", 00:04:42.743 "fsdev_aio_delete", 00:04:42.743 "fsdev_aio_create", 00:04:42.743 "iscsi_get_histogram", 00:04:42.743 "iscsi_enable_histogram", 00:04:42.743 "iscsi_set_options", 00:04:42.743 "iscsi_get_auth_groups", 00:04:42.743 "iscsi_auth_group_remove_secret", 00:04:42.743 "iscsi_auth_group_add_secret", 00:04:42.743 "iscsi_delete_auth_group", 00:04:42.743 "iscsi_create_auth_group", 00:04:42.743 "iscsi_set_discovery_auth", 00:04:42.743 "iscsi_get_options", 00:04:42.743 "iscsi_target_node_request_logout", 00:04:42.743 "iscsi_target_node_set_redirect", 00:04:42.743 "iscsi_target_node_set_auth", 00:04:42.743 "iscsi_target_node_add_lun", 00:04:42.743 "iscsi_get_stats", 00:04:42.743 "iscsi_get_connections", 00:04:42.743 "iscsi_portal_group_set_auth", 00:04:42.743 "iscsi_start_portal_group", 00:04:42.743 "iscsi_delete_portal_group", 00:04:42.743 "iscsi_create_portal_group", 00:04:42.743 "iscsi_get_portal_groups", 00:04:42.743 "iscsi_delete_target_node", 00:04:42.743 "iscsi_target_node_remove_pg_ig_maps", 00:04:42.743 "iscsi_target_node_add_pg_ig_maps", 00:04:42.743 "iscsi_create_target_node", 00:04:42.743 "iscsi_get_target_nodes", 00:04:42.743 "iscsi_delete_initiator_group", 00:04:42.743 "iscsi_initiator_group_remove_initiators", 00:04:42.743 "iscsi_initiator_group_add_initiators", 00:04:42.743 "iscsi_create_initiator_group", 00:04:42.743 "iscsi_get_initiator_groups", 00:04:42.743 "nvmf_set_crdt", 00:04:42.743 "nvmf_set_config", 00:04:42.743 "nvmf_set_max_subsystems", 00:04:42.743 "nvmf_stop_mdns_prr", 00:04:42.743 "nvmf_publish_mdns_prr", 00:04:42.743 "nvmf_subsystem_get_listeners", 00:04:42.743 "nvmf_subsystem_get_qpairs", 00:04:42.743 "nvmf_subsystem_get_controllers", 00:04:42.743 "nvmf_get_stats", 00:04:42.743 "nvmf_get_transports", 00:04:42.743 "nvmf_create_transport", 00:04:42.743 "nvmf_get_targets", 00:04:42.743 "nvmf_delete_target", 00:04:42.743 "nvmf_create_target", 00:04:42.743 "nvmf_subsystem_allow_any_host", 00:04:42.743 
"nvmf_subsystem_set_keys", 00:04:42.743 "nvmf_subsystem_remove_host", 00:04:42.743 "nvmf_subsystem_add_host", 00:04:42.743 "nvmf_ns_remove_host", 00:04:42.743 "nvmf_ns_add_host", 00:04:42.743 "nvmf_subsystem_remove_ns", 00:04:42.743 "nvmf_subsystem_set_ns_ana_group", 00:04:42.743 "nvmf_subsystem_add_ns", 00:04:42.743 "nvmf_subsystem_listener_set_ana_state", 00:04:42.743 "nvmf_discovery_get_referrals", 00:04:42.743 "nvmf_discovery_remove_referral", 00:04:42.743 "nvmf_discovery_add_referral", 00:04:42.743 "nvmf_subsystem_remove_listener", 00:04:42.743 "nvmf_subsystem_add_listener", 00:04:42.743 "nvmf_delete_subsystem", 00:04:42.743 "nvmf_create_subsystem", 00:04:42.743 "nvmf_get_subsystems", 00:04:42.743 "env_dpdk_get_mem_stats", 00:04:42.743 "nbd_get_disks", 00:04:42.743 "nbd_stop_disk", 00:04:42.743 "nbd_start_disk", 00:04:42.743 "ublk_recover_disk", 00:04:42.743 "ublk_get_disks", 00:04:42.743 "ublk_stop_disk", 00:04:42.743 "ublk_start_disk", 00:04:42.743 "ublk_destroy_target", 00:04:42.743 "ublk_create_target", 00:04:42.743 "virtio_blk_create_transport", 00:04:42.743 "virtio_blk_get_transports", 00:04:42.743 "vhost_controller_set_coalescing", 00:04:42.743 "vhost_get_controllers", 00:04:42.743 "vhost_delete_controller", 00:04:42.743 "vhost_create_blk_controller", 00:04:42.743 "vhost_scsi_controller_remove_target", 00:04:42.743 "vhost_scsi_controller_add_target", 00:04:42.743 "vhost_start_scsi_controller", 00:04:42.743 "vhost_create_scsi_controller", 00:04:42.743 "thread_set_cpumask", 00:04:42.743 "scheduler_set_options", 00:04:42.743 "framework_get_governor", 00:04:42.743 "framework_get_scheduler", 00:04:42.743 "framework_set_scheduler", 00:04:42.743 "framework_get_reactors", 00:04:42.743 "thread_get_io_channels", 00:04:42.743 "thread_get_pollers", 00:04:42.743 "thread_get_stats", 00:04:42.743 "framework_monitor_context_switch", 00:04:42.743 "spdk_kill_instance", 00:04:42.743 "log_enable_timestamps", 00:04:42.743 "log_get_flags", 00:04:42.743 "log_clear_flag", 
00:04:42.743 "log_set_flag", 00:04:42.743 "log_get_level", 00:04:42.743 "log_set_level", 00:04:42.743 "log_get_print_level", 00:04:42.743 "log_set_print_level", 00:04:42.743 "framework_enable_cpumask_locks", 00:04:42.743 "framework_disable_cpumask_locks", 00:04:42.743 "framework_wait_init", 00:04:42.743 "framework_start_init", 00:04:42.743 "scsi_get_devices", 00:04:42.743 "bdev_get_histogram", 00:04:42.743 "bdev_enable_histogram", 00:04:42.743 "bdev_set_qos_limit", 00:04:42.743 "bdev_set_qd_sampling_period", 00:04:42.743 "bdev_get_bdevs", 00:04:42.743 "bdev_reset_iostat", 00:04:42.743 "bdev_get_iostat", 00:04:42.743 "bdev_examine", 00:04:42.743 "bdev_wait_for_examine", 00:04:42.743 "bdev_set_options", 00:04:42.743 "accel_get_stats", 00:04:42.743 "accel_set_options", 00:04:42.743 "accel_set_driver", 00:04:42.743 "accel_crypto_key_destroy", 00:04:42.743 "accel_crypto_keys_get", 00:04:42.743 "accel_crypto_key_create", 00:04:42.743 "accel_assign_opc", 00:04:42.743 "accel_get_module_info", 00:04:42.743 "accel_get_opc_assignments", 00:04:42.743 "vmd_rescan", 00:04:42.743 "vmd_remove_device", 00:04:42.743 "vmd_enable", 00:04:42.743 "sock_get_default_impl", 00:04:42.743 "sock_set_default_impl", 00:04:42.743 "sock_impl_set_options", 00:04:42.743 "sock_impl_get_options", 00:04:42.743 "iobuf_get_stats", 00:04:42.743 "iobuf_set_options", 00:04:42.743 "keyring_get_keys", 00:04:42.743 "framework_get_pci_devices", 00:04:42.743 "framework_get_config", 00:04:42.743 "framework_get_subsystems", 00:04:42.743 "fsdev_set_opts", 00:04:42.743 "fsdev_get_opts", 00:04:42.743 "trace_get_info", 00:04:42.743 "trace_get_tpoint_group_mask", 00:04:42.743 "trace_disable_tpoint_group", 00:04:42.743 "trace_enable_tpoint_group", 00:04:42.743 "trace_clear_tpoint_mask", 00:04:42.743 "trace_set_tpoint_mask", 00:04:42.743 "notify_get_notifications", 00:04:42.743 "notify_get_types", 00:04:42.743 "spdk_get_version", 00:04:42.743 "rpc_get_methods" 00:04:42.743 ] 00:04:42.743 10:33:08 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.743 10:33:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:42.743 10:33:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57768 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57768 ']' 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57768 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57768 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.743 killing process with pid 57768 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57768' 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57768 00:04:42.743 10:33:08 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57768 00:04:45.284 00:04:45.284 real 0m4.532s 00:04:45.284 user 0m7.989s 00:04:45.284 sys 0m0.811s 00:04:45.284 10:33:11 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.284 10:33:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.284 ************************************ 00:04:45.284 END TEST spdkcli_tcp 00:04:45.284 ************************************ 00:04:45.284 10:33:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.284 10:33:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.284 10:33:11 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.284 10:33:11 -- common/autotest_common.sh@10 -- # set +x 00:04:45.284 ************************************ 00:04:45.284 START TEST dpdk_mem_utility 00:04:45.284 ************************************ 00:04:45.284 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.544 * Looking for test storage... 00:04:45.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:45.544 
10:33:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.544 10:33:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.544 --rc genhtml_branch_coverage=1 00:04:45.544 --rc genhtml_function_coverage=1 00:04:45.544 --rc genhtml_legend=1 00:04:45.544 --rc geninfo_all_blocks=1 00:04:45.544 --rc geninfo_unexecuted_blocks=1 00:04:45.544 00:04:45.544 ' 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.544 --rc 
genhtml_branch_coverage=1 00:04:45.544 --rc genhtml_function_coverage=1 00:04:45.544 --rc genhtml_legend=1 00:04:45.544 --rc geninfo_all_blocks=1 00:04:45.544 --rc geninfo_unexecuted_blocks=1 00:04:45.544 00:04:45.544 ' 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.544 --rc genhtml_branch_coverage=1 00:04:45.544 --rc genhtml_function_coverage=1 00:04:45.544 --rc genhtml_legend=1 00:04:45.544 --rc geninfo_all_blocks=1 00:04:45.544 --rc geninfo_unexecuted_blocks=1 00:04:45.544 00:04:45.544 ' 00:04:45.544 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.544 --rc genhtml_branch_coverage=1 00:04:45.544 --rc genhtml_function_coverage=1 00:04:45.544 --rc genhtml_legend=1 00:04:45.544 --rc geninfo_all_blocks=1 00:04:45.544 --rc geninfo_unexecuted_blocks=1 00:04:45.544 00:04:45.544 ' 00:04:45.544 10:33:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:45.544 10:33:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57896 00:04:45.544 10:33:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.544 10:33:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57896 00:04:45.545 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57896 ']' 00:04:45.545 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.545 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
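The `lt 1.15 2` / `cmp_versions` trace above is `scripts/common.sh` deciding whether the installed lcov is older than 2.x (and thus needs the `--rc lcov_*_coverage` option spelling). A rough Python equivalent of that field-by-field numeric comparison, splitting on the same `.-:` separators (a sketch, not SPDK code; `version_lt` is a hypothetical name):

```python
import re

def version_lt(v1, v2):
    # Split each version on '.', '-' or ':' and compare numeric fields,
    # padding the shorter list with zeros, as cmp_versions does.
    a = [int(x) for x in re.split(r"[.\-:]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.\-:]", v2) if x.isdigit()]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b

print(version_lt("1.15", "2"))  # the exact check performed on lcov here
```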
00:04:45.545 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.545 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.545 10:33:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.805 [2024-11-18 10:33:11.442557] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:45.805 [2024-11-18 10:33:11.442701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57896 ] 00:04:45.805 [2024-11-18 10:33:11.622821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.065 [2024-11-18 10:33:11.749395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.006 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.006 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:47.006 10:33:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:47.006 10:33:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:47.006 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.006 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.006 { 00:04:47.006 "filename": "/tmp/spdk_mem_dump.txt" 00:04:47.006 } 00:04:47.006 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.006 10:33:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:47.006 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:47.006 1 heaps 
totaling size 816.000000 MiB 00:04:47.006 size: 816.000000 MiB heap id: 0 00:04:47.006 end heaps---------- 00:04:47.006 9 mempools totaling size 595.772034 MiB 00:04:47.006 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:47.006 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:47.006 size: 92.545471 MiB name: bdev_io_57896 00:04:47.006 size: 50.003479 MiB name: msgpool_57896 00:04:47.006 size: 36.509338 MiB name: fsdev_io_57896 00:04:47.006 size: 21.763794 MiB name: PDU_Pool 00:04:47.006 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:47.006 size: 4.133484 MiB name: evtpool_57896 00:04:47.006 size: 0.026123 MiB name: Session_Pool 00:04:47.006 end mempools------- 00:04:47.006 6 memzones totaling size 4.142822 MiB 00:04:47.006 size: 1.000366 MiB name: RG_ring_0_57896 00:04:47.006 size: 1.000366 MiB name: RG_ring_1_57896 00:04:47.006 size: 1.000366 MiB name: RG_ring_4_57896 00:04:47.006 size: 1.000366 MiB name: RG_ring_5_57896 00:04:47.006 size: 0.125366 MiB name: RG_ring_2_57896 00:04:47.006 size: 0.015991 MiB name: RG_ring_3_57896 00:04:47.006 end memzones------- 00:04:47.006 10:33:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:47.006 heap id: 0 total size: 816.000000 MiB number of busy elements: 322 number of free elements: 18 00:04:47.006 list of free elements. 
size: 16.789673 MiB 00:04:47.006 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:47.006 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:47.006 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:47.006 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:47.006 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:47.006 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:47.006 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:47.006 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:47.006 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:47.006 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:47.006 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:47.006 element at address: 0x20001ac00000 with size: 0.559998 MiB 00:04:47.006 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:47.006 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:47.006 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:47.006 element at address: 0x200012c00000 with size: 0.443481 MiB 00:04:47.006 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:47.006 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:47.006 list of standard malloc elements. 
size: 199.289429 MiB 00:04:47.006 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:47.006 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:47.006 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:47.007 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:47.007 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:47.007 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:47.007 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:47.007 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:47.007 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:47.007 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:47.007 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:47.007 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:47.007 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:47.007 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:47.007 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:47.007 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:47.007 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac90ac0 with size: 0.000244 
MiB 00:04:47.008 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac926c0 
with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:47.008 element at 
address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:47.008 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:47.008 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b580 with size: 0.000244 MiB 
00:04:47.008 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d180 with 
size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:47.008 element at address: 
0x20002806ed80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:47.008 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:47.009 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:47.009 list of memzone associated elements. 
size: 599.920898 MiB 00:04:47.009 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:47.009 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:47.009 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:47.009 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:47.009 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:47.009 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57896_0 00:04:47.009 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:47.009 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57896_0 00:04:47.009 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:47.009 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57896_0 00:04:47.009 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:47.009 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:47.009 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:47.009 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:47.009 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:47.009 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57896_0 00:04:47.009 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:47.009 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57896 00:04:47.009 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:47.009 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57896 00:04:47.009 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:47.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:47.009 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:47.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:47.009 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:47.009 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:47.009 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:47.009 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:47.009 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:47.009 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57896 00:04:47.009 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:47.009 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57896 00:04:47.009 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:47.009 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57896 00:04:47.009 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:47.009 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57896 00:04:47.009 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:47.009 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57896 00:04:47.009 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:47.009 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57896 00:04:47.009 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:47.009 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:47.009 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:47.009 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:47.009 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:47.009 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:47.009 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:47.009 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57896 00:04:47.009 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:47.009 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57896 00:04:47.009 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:47.009 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:47.009 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:47.009 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:47.009 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:47.009 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57896 00:04:47.009 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:47.009 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:47.009 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:47.009 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57896 00:04:47.009 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:47.009 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57896 00:04:47.009 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:47.009 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57896 00:04:47.009 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:47.009 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:47.009 10:33:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:47.009 10:33:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57896 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57896 ']' 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57896 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57896 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.009 10:33:12 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.009 killing process with pid 57896 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57896' 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57896 00:04:47.009 10:33:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57896 00:04:49.548 00:04:49.548 real 0m4.236s 00:04:49.548 user 0m3.951s 00:04:49.548 sys 0m0.739s 00:04:49.548 10:33:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.548 10:33:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.548 ************************************ 00:04:49.548 END TEST dpdk_mem_utility 00:04:49.548 ************************************ 00:04:49.548 10:33:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:49.548 10:33:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.548 10:33:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.548 10:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:49.548 ************************************ 00:04:49.548 START TEST event 00:04:49.548 ************************************ 00:04:49.548 10:33:15 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:49.808 * Looking for test storage... 
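The `killprocess` trace above (`kill -0 57896`, `uname`, `ps --no-headers -o comm=`, then `kill` and `wait`) follows a common teardown pattern: probe that the PID is still alive and signalable with `kill -0`, announce the kill, send SIGTERM, and reap the process. A minimal standalone sketch of that pattern, assuming the target is a child of the current shell (the function name `killprocess_sketch` is illustrative, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the kill-and-wait teardown traced above (hypothetical stand-in
# for autotest_common.sh's killprocess; simplified, no sudo/name checks).
killprocess_sketch() {
    local pid=$1
    # kill -0 sends no signal; it only checks that the PID exists
    # and that we are allowed to signal it
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid not running"
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child so the PID is fully gone; '|| true' because wait
    # reports the signal-terminated exit status. wait only works for
    # children of this shell; otherwise you would poll kill -0 instead.
    wait "$pid" 2>/dev/null || true
}

# usage: start a background sleeper and tear it down
sleep 30 &
killprocess_sketch $!
echo "done"
```

`kill -0` is the standard portable liveness probe; the real helper additionally verifies the process name via `ps -o comm=` before signalling, to avoid killing an unrelated process that reused the PID.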
00:04:49.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.808 10:33:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.808 10:33:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.808 10:33:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.808 10:33:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.808 10:33:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.808 10:33:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.808 10:33:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.808 10:33:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.808 10:33:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.808 10:33:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.808 10:33:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.808 10:33:15 event -- scripts/common.sh@344 -- # case "$op" in 00:04:49.808 10:33:15 event -- scripts/common.sh@345 -- # : 1 00:04:49.808 10:33:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.808 10:33:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.808 10:33:15 event -- scripts/common.sh@365 -- # decimal 1 00:04:49.808 10:33:15 event -- scripts/common.sh@353 -- # local d=1 00:04:49.808 10:33:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.808 10:33:15 event -- scripts/common.sh@355 -- # echo 1 00:04:49.808 10:33:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.808 10:33:15 event -- scripts/common.sh@366 -- # decimal 2 00:04:49.808 10:33:15 event -- scripts/common.sh@353 -- # local d=2 00:04:49.808 10:33:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.808 10:33:15 event -- scripts/common.sh@355 -- # echo 2 00:04:49.808 10:33:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.808 10:33:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.808 10:33:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.808 10:33:15 event -- scripts/common.sh@368 -- # return 0 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.808 --rc genhtml_branch_coverage=1 00:04:49.808 --rc genhtml_function_coverage=1 00:04:49.808 --rc genhtml_legend=1 00:04:49.808 --rc geninfo_all_blocks=1 00:04:49.808 --rc geninfo_unexecuted_blocks=1 00:04:49.808 00:04:49.808 ' 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.808 --rc genhtml_branch_coverage=1 00:04:49.808 --rc genhtml_function_coverage=1 00:04:49.808 --rc genhtml_legend=1 00:04:49.808 --rc geninfo_all_blocks=1 00:04:49.808 --rc geninfo_unexecuted_blocks=1 00:04:49.808 00:04:49.808 ' 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.808 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:49.808 --rc genhtml_branch_coverage=1 00:04:49.808 --rc genhtml_function_coverage=1 00:04:49.808 --rc genhtml_legend=1 00:04:49.808 --rc geninfo_all_blocks=1 00:04:49.808 --rc geninfo_unexecuted_blocks=1 00:04:49.808 00:04:49.808 ' 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.808 --rc genhtml_branch_coverage=1 00:04:49.808 --rc genhtml_function_coverage=1 00:04:49.808 --rc genhtml_legend=1 00:04:49.808 --rc geninfo_all_blocks=1 00:04:49.808 --rc geninfo_unexecuted_blocks=1 00:04:49.808 00:04:49.808 ' 00:04:49.808 10:33:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:49.808 10:33:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:49.808 10:33:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:49.808 10:33:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.808 10:33:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.808 ************************************ 00:04:49.808 START TEST event_perf 00:04:49.808 ************************************ 00:04:49.808 10:33:15 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.068 Running I/O for 1 seconds...[2024-11-18 10:33:15.695693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:50.068 [2024-11-18 10:33:15.695803] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58004 ] 00:04:50.068 [2024-11-18 10:33:15.875501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:50.327 [2024-11-18 10:33:16.013540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.327 [2024-11-18 10:33:16.013777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.327 [2024-11-18 10:33:16.013963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.327 Running I/O for 1 seconds...[2024-11-18 10:33:16.013965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.705 00:04:51.705 lcore 0: 93444 00:04:51.705 lcore 1: 93447 00:04:51.705 lcore 2: 93444 00:04:51.705 lcore 3: 93447 00:04:51.705 done. 
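The version check traced earlier in this log (scripts/common.sh splitting on `IFS=.-:` and comparing components for `lt 1.15 2`) amounts to a component-wise numeric comparison. A minimal standalone sketch of that idea follows; it is an illustrative reimplementation, not the actual scripts/common.sh source, and `version_lt` is a hypothetical name:

```shell
#!/usr/bin/env bash
# Illustrative component-wise version comparison, modeled on the
# cmp_versions xtrace above. Not the actual scripts/common.sh code.
version_lt() {
    local IFS=.-:            # split on dots, dashes, and colons, as in the trace
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # → 1.15 < 2
```

This mirrors why the trace above ends in `return 0`: the first component 1 is already less than 2, so the lcov version qualifies for the coverage options.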
00:04:51.705 00:04:51.705 real 0m1.621s 00:04:51.705 user 0m4.366s 00:04:51.705 sys 0m0.134s 00:04:51.705 10:33:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.705 10:33:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.705 ************************************ 00:04:51.705 END TEST event_perf 00:04:51.705 ************************************ 00:04:51.705 10:33:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.705 10:33:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:51.705 10:33:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.705 10:33:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.705 ************************************ 00:04:51.705 START TEST event_reactor 00:04:51.705 ************************************ 00:04:51.705 10:33:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.705 [2024-11-18 10:33:17.381076] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:51.705 [2024-11-18 10:33:17.381201] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58043 ] 00:04:51.706 [2024-11-18 10:33:17.559071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.988 [2024-11-18 10:33:17.699476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.388 test_start 00:04:53.388 oneshot 00:04:53.388 tick 100 00:04:53.388 tick 100 00:04:53.388 tick 250 00:04:53.388 tick 100 00:04:53.388 tick 100 00:04:53.388 tick 100 00:04:53.388 tick 250 00:04:53.388 tick 500 00:04:53.388 tick 100 00:04:53.388 tick 100 00:04:53.388 tick 250 00:04:53.388 tick 100 00:04:53.388 tick 100 00:04:53.388 test_end 00:04:53.388 00:04:53.388 real 0m1.604s 00:04:53.388 user 0m1.378s 00:04:53.388 sys 0m0.118s 00:04:53.388 10:33:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.388 10:33:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:53.388 ************************************ 00:04:53.388 END TEST event_reactor 00:04:53.388 ************************************ 00:04:53.388 10:33:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.388 10:33:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:53.388 10:33:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.388 10:33:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.388 ************************************ 00:04:53.388 START TEST event_reactor_perf 00:04:53.389 ************************************ 00:04:53.389 10:33:19 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.389 [2024-11-18 
10:33:19.050758] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:53.389 [2024-11-18 10:33:19.050881] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58080 ] 00:04:53.389 [2024-11-18 10:33:19.219146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.652 [2024-11-18 10:33:19.356823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.032 test_start 00:04:55.032 test_end 00:04:55.032 Performance: 408122 events per second 00:04:55.032 00:04:55.032 real 0m1.591s 00:04:55.032 user 0m1.371s 00:04:55.032 sys 0m0.112s 00:04:55.032 10:33:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.032 10:33:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.032 ************************************ 00:04:55.032 END TEST event_reactor_perf 00:04:55.032 ************************************ 00:04:55.032 10:33:20 event -- event/event.sh@49 -- # uname -s 00:04:55.033 10:33:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:55.033 10:33:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:55.033 10:33:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.033 10:33:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.033 10:33:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.033 ************************************ 00:04:55.033 START TEST event_scheduler 00:04:55.033 ************************************ 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:55.033 * Looking for test storage... 
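Each test above is launched through `run_test`, which brackets the binary's output with the `START TEST` / `END TEST` banners and reports elapsed time. A simplified, illustrative wrapper in the same spirit (this is a sketch, not the actual autotest_common.sh implementation; `run_test_sketch` is a hypothetical name):

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: banner, run, time, banner.
# Not the actual autotest_common.sh run_test.
run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    local start end rc
    start=$(date +%s)
    "$@"; rc=$?
    end=$(date +%s)
    echo "************ END TEST $name (rc=$rc, $((end - start))s) ************"
    return $rc
}

run_test_sketch demo true
```

The real wrapper also toggles xtrace (`xtrace_disable` / `set +x`, visible throughout this log) around each test body.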
00:04:55.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.033 10:33:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.033 --rc genhtml_branch_coverage=1 00:04:55.033 --rc genhtml_function_coverage=1 00:04:55.033 --rc genhtml_legend=1 00:04:55.033 --rc geninfo_all_blocks=1 00:04:55.033 --rc geninfo_unexecuted_blocks=1 00:04:55.033 00:04:55.033 ' 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.033 --rc genhtml_branch_coverage=1 00:04:55.033 --rc genhtml_function_coverage=1 00:04:55.033 --rc 
genhtml_legend=1 00:04:55.033 --rc geninfo_all_blocks=1 00:04:55.033 --rc geninfo_unexecuted_blocks=1 00:04:55.033 00:04:55.033 ' 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.033 --rc genhtml_branch_coverage=1 00:04:55.033 --rc genhtml_function_coverage=1 00:04:55.033 --rc genhtml_legend=1 00:04:55.033 --rc geninfo_all_blocks=1 00:04:55.033 --rc geninfo_unexecuted_blocks=1 00:04:55.033 00:04:55.033 ' 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.033 --rc genhtml_branch_coverage=1 00:04:55.033 --rc genhtml_function_coverage=1 00:04:55.033 --rc genhtml_legend=1 00:04:55.033 --rc geninfo_all_blocks=1 00:04:55.033 --rc geninfo_unexecuted_blocks=1 00:04:55.033 00:04:55.033 ' 00:04:55.033 10:33:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:55.033 10:33:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58156 00:04:55.033 10:33:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:55.033 10:33:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.033 10:33:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58156 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58156 ']' 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
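The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from `waitforlisten`, which polls until the target app exposes its RPC socket. A minimal poll loop capturing the same idea (an assumption-laden sketch; `wait_for_socket` is a hypothetical helper, not the actual autotest_common.sh function):

```shell
#!/usr/bin/env bash
# Illustrative waitforlisten-style poll: succeed once a UNIX domain
# socket exists, fail after a retry budget. Hypothetical helper only.
wait_for_socket() {
    local sock=$1 retries=${2:-100} i
    for (( i = 0; i < retries; i++ )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

wait_for_socket /tmp/no-such.sock 2 || echo "timed out"   # → timed out
```

The real helper additionally checks that the process is still alive between polls, so a crashed app fails fast instead of burning the whole retry budget.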
00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.033 10:33:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.293 [2024-11-18 10:33:20.980199] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:55.293 [2024-11-18 10:33:20.980364] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58156 ] 00:04:55.293 [2024-11-18 10:33:21.161986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.552 [2024-11-18 10:33:21.294315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.552 [2024-11-18 10:33:21.294564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.552 [2024-11-18 10:33:21.294680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.552 [2024-11-18 10:33:21.294711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.121 10:33:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.121 10:33:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:56.121 10:33:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:56.121 10:33:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.121 10:33:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.121 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.121 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.121 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.121 POWER: Cannot set governor of lcore 0 to performance 00:04:56.121 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.121 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.121 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.121 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.121 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:56.121 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:56.121 POWER: Unable to set Power Management Environment for lcore 0 00:04:56.121 [2024-11-18 10:33:21.807641] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:56.121 [2024-11-18 10:33:21.807663] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:56.121 [2024-11-18 10:33:21.807673] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:56.121 [2024-11-18 10:33:21.807695] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:56.121 [2024-11-18 10:33:21.807704] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:56.121 [2024-11-18 10:33:21.807713] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:56.121 10:33:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.121 10:33:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:56.121 10:33:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.121 10:33:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.380 [2024-11-18 10:33:22.182434] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
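The scheduler test drives the app over RPC: `framework_set_scheduler dynamic` before init (hence `--wait-for-rpc`), then `framework_start_init`, then plugin-provided thread RPCs. The sequence can be replayed in dry-run form for inspection; the `echo` shim below is an illustrative stand-in so the commands print instead of requiring a running SPDK app (RPC names and flags are taken verbatim from this log):

```shell
#!/usr/bin/env bash
# Dry-run replay of the RPC sequence in this scheduler test. rpc_cmd is
# an echo shim here, not the real autotest_common.sh rpc_cmd, so no
# SPDK app needs to be listening on /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc.py -s /var/tmp/spdk.sock $*"; }

rpc_cmd framework_set_scheduler dynamic     # must happen before init
rpc_cmd framework_start_init                # leave the --wait-for-rpc state
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
```

The POWER/governor errors above are expected in this VM: with no writable cpufreq sysfs and no virtio power channel, the dpdk governor fails to initialize and the dynamic scheduler falls back to its load-limit defaults (20/80/95), as the NOTICE lines show.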
00:04:56.380 10:33:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.380 10:33:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:56.380 10:33:22 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.380 10:33:22 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.380 10:33:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.380 ************************************ 00:04:56.380 START TEST scheduler_create_thread 00:04:56.380 ************************************ 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.380 2 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.380 3 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.380 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.381 4 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.381 5 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.381 6 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.381 10:33:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.640 7 00:04:56.640 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.641 8 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.641 9 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.641 10 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.641 10:33:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.021 10:33:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.021 10:33:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:58.021 10:33:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:58.021 10:33:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.021 10:33:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.590 10:33:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.590 10:33:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:58.590 10:33:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.590 10:33:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.528 10:33:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.528 10:33:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:59.528 10:33:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:59.528 10:33:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.528 10:33:25 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.467 10:33:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.467 00:05:00.467 real 0m3.885s 00:05:00.467 user 0m0.032s 00:05:00.467 sys 0m0.004s 00:05:00.467 10:33:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.467 10:33:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.467 ************************************ 00:05:00.467 END TEST scheduler_create_thread 00:05:00.467 ************************************ 00:05:00.467 10:33:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:00.467 10:33:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58156 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58156 ']' 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58156 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58156 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:00.467 killing process with pid 58156 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58156' 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58156 00:05:00.467 10:33:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58156 00:05:00.727 [2024-11-18 10:33:26.460185] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:02.109 00:05:02.109 real 0m6.996s 00:05:02.109 user 0m14.236s 00:05:02.109 sys 0m0.620s 00:05:02.109 10:33:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.109 10:33:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.109 ************************************ 00:05:02.109 END TEST event_scheduler 00:05:02.109 ************************************ 00:05:02.109 10:33:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:02.109 10:33:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:02.109 10:33:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.109 10:33:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.109 10:33:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.109 ************************************ 00:05:02.109 START TEST app_repeat 00:05:02.109 ************************************ 00:05:02.109 10:33:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58283 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.109 10:33:27 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58283' 00:05:02.109 Process app_repeat pid: 58283 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.109 spdk_app_start Round 0 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:02.109 10:33:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58283 /var/tmp/spdk-nbd.sock 00:05:02.109 10:33:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58283 ']' 00:05:02.109 10:33:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.109 10:33:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.109 10:33:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.109 10:33:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.109 10:33:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.109 [2024-11-18 10:33:27.806419] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:02.109 [2024-11-18 10:33:27.806529] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58283 ] 00:05:02.109 [2024-11-18 10:33:27.986480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.368 [2024-11-18 10:33:28.118191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.368 [2024-11-18 10:33:28.118265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.938 10:33:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.938 10:33:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.938 10:33:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.198 Malloc0 00:05:03.198 10:33:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.457 Malloc1 00:05:03.457 10:33:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.457 10:33:29 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.457 10:33:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.719 /dev/nbd0 00:05:03.719 10:33:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.719 10:33:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.719 1+0 records in 00:05:03.719 1+0 
records out 00:05:03.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444396 s, 9.2 MB/s 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.719 10:33:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.719 10:33:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.719 10:33:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.719 10:33:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.998 /dev/nbd1 00:05:03.998 10:33:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.998 10:33:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.998 1+0 records in 00:05:03.998 1+0 records out 00:05:03.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378966 s, 10.8 MB/s 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.998 10:33:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.998 10:33:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.998 10:33:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.998 10:33:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.998 10:33:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.998 10:33:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.275 10:33:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.275 { 00:05:04.275 "nbd_device": "/dev/nbd0", 00:05:04.275 "bdev_name": "Malloc0" 00:05:04.275 }, 00:05:04.275 { 00:05:04.275 "nbd_device": "/dev/nbd1", 00:05:04.275 "bdev_name": "Malloc1" 00:05:04.275 } 00:05:04.275 ]' 00:05:04.275 10:33:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.275 { 00:05:04.275 "nbd_device": "/dev/nbd0", 00:05:04.275 "bdev_name": "Malloc0" 00:05:04.275 }, 00:05:04.275 { 00:05:04.275 "nbd_device": "/dev/nbd1", 00:05:04.275 "bdev_name": "Malloc1" 00:05:04.275 } 00:05:04.275 ]' 00:05:04.275 10:33:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:04.275 10:33:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.275 /dev/nbd1' 00:05:04.275 10:33:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.275 10:33:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.275 /dev/nbd1' 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.275 256+0 records in 00:05:04.275 256+0 records out 00:05:04.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121219 s, 86.5 MB/s 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.275 256+0 records in 00:05:04.275 256+0 records out 00:05:04.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230837 s, 45.4 MB/s 00:05:04.275 10:33:30 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.275 256+0 records in 00:05:04.275 256+0 records out 00:05:04.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255726 s, 41.0 MB/s 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.275 10:33:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.276 10:33:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.276 10:33:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.276 10:33:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.276 10:33:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.276 10:33:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.276 10:33:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.536 10:33:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.797 10:33:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.057 10:33:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.057 10:33:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.626 10:33:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.566 [2024-11-18 10:33:32.405280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.826 [2024-11-18 10:33:32.529592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.826 [2024-11-18 10:33:32.529596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.086 
[2024-11-18 10:33:32.746022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.086 [2024-11-18 10:33:32.746104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.467 10:33:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.467 spdk_app_start Round 1 00:05:08.467 10:33:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:08.467 10:33:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58283 /var/tmp/spdk-nbd.sock 00:05:08.467 10:33:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58283 ']' 00:05:08.467 10:33:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.467 10:33:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.467 10:33:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:08.467 10:33:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.467 10:33:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.727 10:33:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.727 10:33:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:08.727 10:33:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.986 Malloc0 00:05:08.986 10:33:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.245 Malloc1 00:05:09.245 10:33:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.245 10:33:34 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.245 10:33:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.504 /dev/nbd0 00:05:09.504 10:33:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.504 10:33:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.504 10:33:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:09.504 10:33:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.504 10:33:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.504 10:33:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.504 10:33:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:09.504 10:33:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.505 10:33:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.505 10:33:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.505 10:33:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.505 1+0 records in 00:05:09.505 1+0 records out 00:05:09.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304765 s, 13.4 MB/s 00:05:09.505 10:33:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.505 10:33:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.505 10:33:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.505 
10:33:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.505 10:33:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.505 10:33:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.505 10:33:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.505 10:33:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.764 /dev/nbd1 00:05:09.764 10:33:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.764 10:33:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.764 1+0 records in 00:05:09.764 1+0 records out 00:05:09.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200948 s, 20.4 MB/s 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.764 10:33:35 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.764 10:33:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.764 10:33:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.764 10:33:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.764 10:33:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.764 10:33:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.764 10:33:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.024 { 00:05:10.024 "nbd_device": "/dev/nbd0", 00:05:10.024 "bdev_name": "Malloc0" 00:05:10.024 }, 00:05:10.024 { 00:05:10.024 "nbd_device": "/dev/nbd1", 00:05:10.024 "bdev_name": "Malloc1" 00:05:10.024 } 00:05:10.024 ]' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.024 { 00:05:10.024 "nbd_device": "/dev/nbd0", 00:05:10.024 "bdev_name": "Malloc0" 00:05:10.024 }, 00:05:10.024 { 00:05:10.024 "nbd_device": "/dev/nbd1", 00:05:10.024 "bdev_name": "Malloc1" 00:05:10.024 } 00:05:10.024 ]' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.024 /dev/nbd1' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.024 /dev/nbd1' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.024 
10:33:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.024 256+0 records in 00:05:10.024 256+0 records out 00:05:10.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0081252 s, 129 MB/s 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.024 256+0 records in 00:05:10.024 256+0 records out 00:05:10.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228882 s, 45.8 MB/s 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.024 256+0 records in 00:05:10.024 256+0 records out 00:05:10.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273611 s, 38.3 MB/s 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.024 10:33:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.284 10:33:36 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.284 10:33:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.543 10:33:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.803 10:33:36 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.803 10:33:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.803 10:33:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.373 10:33:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.311 [2024-11-18 10:33:38.162859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.570 [2024-11-18 10:33:38.271612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.570 [2024-11-18 10:33:38.271635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.831 [2024-11-18 10:33:38.486385] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.831 [2024-11-18 10:33:38.486470] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:14.216 spdk_app_start Round 2 00:05:14.216 10:33:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.216 10:33:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:14.216 10:33:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58283 /var/tmp/spdk-nbd.sock 00:05:14.216 10:33:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58283 ']' 00:05:14.216 10:33:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.216 10:33:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.216 10:33:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.216 10:33:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.216 10:33:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.483 10:33:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.483 10:33:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.483 10:33:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.742 Malloc0 00:05:14.742 10:33:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.002 Malloc1 00:05:15.002 10:33:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.002 
10:33:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.002 10:33:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.003 10:33:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.003 10:33:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.003 10:33:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.003 10:33:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.263 /dev/nbd0 00:05:15.263 10:33:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.263 10:33:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.263 10:33:40 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.263 1+0 records in 00:05:15.263 1+0 records out 00:05:15.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637239 s, 6.4 MB/s 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.263 10:33:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.263 10:33:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.263 10:33:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.263 10:33:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.263 10:33:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.263 10:33:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.523 /dev/nbd1 00:05:15.523 10:33:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.523 10:33:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.523 10:33:41 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.523 1+0 records in 00:05:15.523 1+0 records out 00:05:15.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393871 s, 10.4 MB/s 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.523 10:33:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.523 10:33:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.523 10:33:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.523 10:33:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.523 10:33:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.523 10:33:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.784 { 00:05:15.784 "nbd_device": "/dev/nbd0", 00:05:15.784 "bdev_name": "Malloc0" 00:05:15.784 }, 00:05:15.784 { 00:05:15.784 "nbd_device": "/dev/nbd1", 00:05:15.784 "bdev_name": 
"Malloc1" 00:05:15.784 } 00:05:15.784 ]' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.784 { 00:05:15.784 "nbd_device": "/dev/nbd0", 00:05:15.784 "bdev_name": "Malloc0" 00:05:15.784 }, 00:05:15.784 { 00:05:15.784 "nbd_device": "/dev/nbd1", 00:05:15.784 "bdev_name": "Malloc1" 00:05:15.784 } 00:05:15.784 ]' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.784 /dev/nbd1' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.784 /dev/nbd1' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.784 256+0 records in 00:05:15.784 256+0 records out 00:05:15.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129728 s, 80.8 MB/s 
00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.784 256+0 records in 00:05:15.784 256+0 records out 00:05:15.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264431 s, 39.7 MB/s 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.784 256+0 records in 00:05:15.784 256+0 records out 00:05:15.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029216 s, 35.9 MB/s 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.784 10:33:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.044 10:33:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.304 10:33:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.564 10:33:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.564 10:33:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.133 10:33:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.514 [2024-11-18 10:33:43.975759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.514 [2024-11-18 10:33:44.108283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.514 [2024-11-18 10:33:44.108284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.514 [2024-11-18 10:33:44.327673] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.514 [2024-11-18 10:33:44.327852] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.897 10:33:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58283 /var/tmp/spdk-nbd.sock 00:05:19.897 10:33:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58283 ']' 00:05:19.897 10:33:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.897 10:33:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.897 10:33:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:19.897 10:33:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.897 10:33:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.156 10:33:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.156 10:33:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.156 10:33:45 event.app_repeat -- event/event.sh@39 -- # killprocess 58283 00:05:20.156 10:33:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58283 ']' 00:05:20.156 10:33:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58283 00:05:20.156 10:33:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:20.156 10:33:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.156 10:33:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58283 00:05:20.156 killing process with pid 58283 00:05:20.156 10:33:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.156 10:33:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.156 10:33:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58283' 00:05:20.156 10:33:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58283 00:05:20.156 10:33:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58283 00:05:21.535 spdk_app_start is called in Round 0. 00:05:21.535 Shutdown signal received, stop current app iteration 00:05:21.535 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:21.535 spdk_app_start is called in Round 1. 00:05:21.535 Shutdown signal received, stop current app iteration 00:05:21.535 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:21.535 spdk_app_start is called in Round 2. 
00:05:21.535 Shutdown signal received, stop current app iteration 00:05:21.535 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:21.535 spdk_app_start is called in Round 3. 00:05:21.535 Shutdown signal received, stop current app iteration 00:05:21.535 ************************************ 00:05:21.535 END TEST app_repeat 00:05:21.535 ************************************ 00:05:21.535 10:33:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:21.535 10:33:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:21.535 00:05:21.535 real 0m19.365s 00:05:21.535 user 0m41.008s 00:05:21.535 sys 0m3.040s 00:05:21.535 10:33:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.535 10:33:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.535 10:33:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:21.535 10:33:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:21.535 10:33:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.535 10:33:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.535 10:33:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.535 ************************************ 00:05:21.535 START TEST cpu_locks 00:05:21.535 ************************************ 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:21.535 * Looking for test storage... 
00:05:21.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.535 10:33:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.535 --rc genhtml_branch_coverage=1 00:05:21.535 --rc genhtml_function_coverage=1 00:05:21.535 --rc genhtml_legend=1 00:05:21.535 --rc geninfo_all_blocks=1 00:05:21.535 --rc geninfo_unexecuted_blocks=1 00:05:21.535 00:05:21.535 ' 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.535 --rc genhtml_branch_coverage=1 00:05:21.535 --rc genhtml_function_coverage=1 00:05:21.535 --rc genhtml_legend=1 00:05:21.535 --rc geninfo_all_blocks=1 00:05:21.535 --rc geninfo_unexecuted_blocks=1 
00:05:21.535 00:05:21.535 ' 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.535 --rc genhtml_branch_coverage=1 00:05:21.535 --rc genhtml_function_coverage=1 00:05:21.535 --rc genhtml_legend=1 00:05:21.535 --rc geninfo_all_blocks=1 00:05:21.535 --rc geninfo_unexecuted_blocks=1 00:05:21.535 00:05:21.535 ' 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.535 --rc genhtml_branch_coverage=1 00:05:21.535 --rc genhtml_function_coverage=1 00:05:21.535 --rc genhtml_legend=1 00:05:21.535 --rc geninfo_all_blocks=1 00:05:21.535 --rc geninfo_unexecuted_blocks=1 00:05:21.535 00:05:21.535 ' 00:05:21.535 10:33:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:21.535 10:33:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:21.535 10:33:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:21.535 10:33:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.535 10:33:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.535 ************************************ 00:05:21.535 START TEST default_locks 00:05:21.535 ************************************ 00:05:21.535 10:33:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:21.535 10:33:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58726 00:05:21.535 10:33:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.535 
10:33:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58726 00:05:21.535 10:33:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58726 ']' 00:05:21.535 10:33:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.535 10:33:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.536 10:33:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.536 10:33:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.536 10:33:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.795 [2024-11-18 10:33:47.510275] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:21.795 [2024-11-18 10:33:47.510486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58726 ] 00:05:22.055 [2024-11-18 10:33:47.688562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.055 [2024-11-18 10:33:47.824030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.995 10:33:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.995 10:33:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:22.995 10:33:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58726 00:05:22.995 10:33:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58726 00:05:22.995 10:33:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58726 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58726 ']' 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58726 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58726 00:05:23.563 killing process with pid 58726 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58726' 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58726 00:05:23.563 10:33:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58726 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58726 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58726 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:26.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58726 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58726 ']' 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 ERROR: process (pid: 58726) is no longer running 00:05:26.107 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58726) - No such process 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.107 00:05:26.107 real 0m4.327s 00:05:26.107 user 0m4.068s 00:05:26.107 sys 0m0.800s 00:05:26.107 ************************************ 00:05:26.107 END TEST default_locks 00:05:26.107 ************************************ 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.107 10:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 10:33:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:26.107 10:33:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:26.107 10:33:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.107 10:33:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 ************************************ 00:05:26.107 START TEST default_locks_via_rpc 00:05:26.107 ************************************ 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58801 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58801 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58801 ']' 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.107 10:33:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 [2024-11-18 10:33:51.908255] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:26.107 [2024-11-18 10:33:51.908486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58801 ] 00:05:26.367 [2024-11-18 10:33:52.074782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.367 [2024-11-18 10:33:52.207687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.307 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.566 10:33:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.566 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58801 00:05:27.566 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58801 00:05:27.566 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58801 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58801 ']' 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58801 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58801 00:05:27.826 killing process with pid 58801 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58801' 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58801 00:05:27.826 10:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58801 00:05:30.365 ************************************ 00:05:30.365 END TEST default_locks_via_rpc 00:05:30.365 ************************************ 00:05:30.365 00:05:30.365 real 0m4.194s 00:05:30.365 user 0m3.956s 00:05:30.365 sys 0m0.768s 00:05:30.365 
10:33:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.365 10:33:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 10:33:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:30.365 10:33:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.365 10:33:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.365 10:33:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 ************************************ 00:05:30.365 START TEST non_locking_app_on_locked_coremask 00:05:30.365 ************************************ 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58875 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58875 /var/tmp/spdk.sock 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58875 ']' 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:30.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.365 10:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.365 [2024-11-18 10:33:56.166621] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:30.365 [2024-11-18 10:33:56.166845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58875 ] 00:05:30.625 [2024-11-18 10:33:56.338473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.625 [2024-11-18 10:33:56.468494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58896 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58896 /var/tmp/spdk2.sock 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58896 ']' 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.564 10:33:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.564 10:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.824 [2024-11-18 10:33:57.539080] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:31.824 [2024-11-18 10:33:57.539316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:05:32.084 [2024-11-18 10:33:57.710689] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.084 [2024-11-18 10:33:57.710742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.343 [2024-11-18 10:33:57.979629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.251 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.251 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.251 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58875 00:05:34.251 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58875 00:05:34.251 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58875 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58875 ']' 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58875 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58875 00:05:35.191 killing process with pid 58875 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58875' 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58875 00:05:35.191 10:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58875 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58896 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58896 ']' 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58896 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58896 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58896' 00:05:40.506 killing process with pid 58896 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58896 00:05:40.506 10:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58896 00:05:43.046 00:05:43.046 real 0m12.248s 00:05:43.046 user 0m12.107s 00:05:43.046 sys 0m1.724s 00:05:43.046 10:34:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:43.046 ************************************ 00:05:43.046 END TEST non_locking_app_on_locked_coremask 00:05:43.046 ************************************ 00:05:43.046 10:34:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.046 10:34:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:43.046 10:34:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.046 10:34:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.046 10:34:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.046 ************************************ 00:05:43.046 START TEST locking_app_on_unlocked_coremask 00:05:43.046 ************************************ 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59053 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59053 /var/tmp/spdk.sock 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59053 ']' 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.046 10:34:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.046 [2024-11-18 10:34:08.485748] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:43.046 [2024-11-18 10:34:08.485975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59053 ] 00:05:43.046 [2024-11-18 10:34:08.660687] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:43.046 [2024-11-18 10:34:08.660868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.046 [2024-11-18 10:34:08.793884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.986 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.986 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.986 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59069 00:05:43.986 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59069 /var/tmp/spdk2.sock 00:05:43.986 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.986 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59069 ']' 00:05:43.986 10:34:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.986 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.986 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.987 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.987 10:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.987 [2024-11-18 10:34:09.857435] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:43.987 [2024-11-18 10:34:09.857658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59069 ] 00:05:44.246 [2024-11-18 10:34:10.028206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.506 [2024-11-18 10:34:10.296280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59069 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59069 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59053 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59053 ']' 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59053 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59053 00:05:47.046 killing process with pid 59053 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59053' 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59053 00:05:47.046 10:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59053 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59069 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59069 ']' 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59069 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.325 
10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59069 00:05:52.325 killing process with pid 59069 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59069' 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59069 00:05:52.325 10:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59069 00:05:54.888 00:05:54.888 real 0m11.892s 00:05:54.888 user 0m11.739s 00:05:54.888 sys 0m1.565s 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.888 ************************************ 00:05:54.888 END TEST locking_app_on_unlocked_coremask 00:05:54.888 ************************************ 00:05:54.888 10:34:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:54.888 10:34:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.888 10:34:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.888 10:34:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.888 ************************************ 00:05:54.888 START TEST locking_app_on_locked_coremask 00:05:54.888 
************************************ 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59225 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59225 /var/tmp/spdk.sock 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59225 ']' 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.888 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.889 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.889 10:34:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.889 [2024-11-18 10:34:20.450415] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:54.889 [2024-11-18 10:34:20.450637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:05:54.889 [2024-11-18 10:34:20.628867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.889 [2024-11-18 10:34:20.762002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59241 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59241 /var/tmp/spdk2.sock 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59241 /var/tmp/spdk2.sock 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59241 /var/tmp/spdk2.sock 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59241 ']' 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.271 10:34:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.271 [2024-11-18 10:34:21.829608] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:56.271 [2024-11-18 10:34:21.829808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:05:56.271 [2024-11-18 10:34:21.995566] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59225 has claimed it. 00:05:56.271 [2024-11-18 10:34:21.995631] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:56.841 ERROR: process (pid: 59241) is no longer running 00:05:56.841 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59241) - No such process 00:05:56.841 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.841 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:56.841 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:56.842 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.842 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.842 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.842 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59225 00:05:56.842 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59225 00:05:56.842 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59225 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59225 ']' 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59225 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59225 00:05:57.102 
10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.102 killing process with pid 59225 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59225' 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59225 00:05:57.102 10:34:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59225 00:05:59.644 00:05:59.644 real 0m5.052s 00:05:59.644 user 0m5.007s 00:05:59.644 sys 0m1.004s 00:05:59.644 10:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.644 ************************************ 00:05:59.644 END TEST locking_app_on_locked_coremask 00:05:59.644 ************************************ 00:05:59.644 10:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.644 10:34:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:59.644 10:34:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.644 10:34:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.644 10:34:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.644 ************************************ 00:05:59.644 START TEST locking_overlapped_coremask 00:05:59.644 ************************************ 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59316 00:05:59.644 10:34:25 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59316 /var/tmp/spdk.sock 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59316 ']' 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.644 10:34:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.904 [2024-11-18 10:34:25.554214] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:59.904 [2024-11-18 10:34:25.554440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:05:59.904 [2024-11-18 10:34:25.727832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.164 [2024-11-18 10:34:25.870556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.164 [2024-11-18 10:34:25.870694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.164 [2024-11-18 10:34:25.870742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59334 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59334 /var/tmp/spdk2.sock 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59334 /var/tmp/spdk2.sock 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59334 /var/tmp/spdk2.sock 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59334 ']' 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.103 10:34:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.103 [2024-11-18 10:34:26.955136] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:01.103 [2024-11-18 10:34:26.955379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59334 ] 00:06:01.363 [2024-11-18 10:34:27.125449] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59316 has claimed it. 00:06:01.363 [2024-11-18 10:34:27.125514] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:01.932 ERROR: process (pid: 59334) is no longer running 00:06:01.932 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59334) - No such process 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59316 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59316 ']' 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59316 00:06:01.932 10:34:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59316 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59316' 00:06:01.932 killing process with pid 59316 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59316 00:06:01.932 10:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59316 00:06:04.474 00:06:04.474 real 0m4.681s 00:06:04.474 user 0m12.532s 00:06:04.474 sys 0m0.767s 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.474 ************************************ 00:06:04.474 END TEST locking_overlapped_coremask 00:06:04.474 ************************************ 00:06:04.474 10:34:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:04.474 10:34:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.474 10:34:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.474 10:34:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.474 ************************************ 00:06:04.474 START TEST 
locking_overlapped_coremask_via_rpc 00:06:04.474 ************************************ 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59398 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59398 /var/tmp/spdk.sock 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59398 ']' 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.474 10:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.474 [2024-11-18 10:34:30.324067] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:04.474 [2024-11-18 10:34:30.324214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59398 ] 00:06:04.734 [2024-11-18 10:34:30.506653] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.734 [2024-11-18 10:34:30.506708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.994 [2024-11-18 10:34:30.647303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.994 [2024-11-18 10:34:30.647452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.994 [2024-11-18 10:34:30.647492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59427 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59427 /var/tmp/spdk2.sock 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59427 ']' 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.931 10:34:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.931 10:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.931 [2024-11-18 10:34:31.735887] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:05.931 [2024-11-18 10:34:31.736117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59427 ] 00:06:06.190 [2024-11-18 10:34:31.902588] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.190 [2024-11-18 10:34:31.902638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.449 [2024-11-18 10:34:32.202027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.449 [2024-11-18 10:34:32.205248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.449 [2024-11-18 10:34:32.205257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.999 10:34:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.999 [2024-11-18 10:34:34.307344] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59398 has claimed it. 00:06:08.999 request: 00:06:08.999 { 00:06:08.999 "method": "framework_enable_cpumask_locks", 00:06:08.999 "req_id": 1 00:06:08.999 } 00:06:08.999 Got JSON-RPC error response 00:06:08.999 response: 00:06:08.999 { 00:06:08.999 "code": -32603, 00:06:08.999 "message": "Failed to claim CPU core: 2" 00:06:08.999 } 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59398 /var/tmp/spdk.sock 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59398 ']' 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59427 /var/tmp/spdk2.sock 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59427 ']' 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.999 00:06:08.999 real 0m4.548s 00:06:08.999 user 0m1.257s 00:06:08.999 sys 0m0.198s 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.999 10:34:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.999 ************************************ 00:06:08.999 END TEST locking_overlapped_coremask_via_rpc 00:06:08.999 ************************************ 00:06:08.999 10:34:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:08.999 10:34:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59398 ]] 00:06:08.999 10:34:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59398 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59398 ']' 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59398 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59398 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59398' 00:06:08.999 killing process with pid 59398 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59398 00:06:08.999 10:34:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59398 00:06:12.291 10:34:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59427 ]] 00:06:12.291 10:34:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59427 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59427 ']' 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59427 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59427 00:06:12.291 killing process with pid 59427 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59427' 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59427 00:06:12.291 10:34:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59427 00:06:14.828 10:34:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.828 10:34:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.828 10:34:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59398 ]] 00:06:14.828 10:34:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59398 00:06:14.828 10:34:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59398 ']' 00:06:14.828 10:34:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59398 00:06:14.828 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59398) - No such process 00:06:14.828 Process with pid 59398 is not found 00:06:14.828 Process with pid 59427 is not found 00:06:14.828 10:34:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59398 is not found' 00:06:14.828 10:34:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59427 ]] 00:06:14.828 10:34:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59427 00:06:14.828 10:34:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59427 ']' 00:06:14.828 10:34:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59427 00:06:14.828 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59427) - No such process 00:06:14.828 10:34:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59427 is not found' 00:06:14.828 10:34:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.828 00:06:14.828 real 0m52.985s 00:06:14.828 user 1m28.330s 00:06:14.828 sys 0m8.412s 00:06:14.828 10:34:40 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.828 10:34:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.828 
************************************ 00:06:14.828 END TEST cpu_locks 00:06:14.828 ************************************ 00:06:14.828 00:06:14.828 real 1m24.798s 00:06:14.828 user 2m30.956s 00:06:14.828 sys 0m12.826s 00:06:14.828 10:34:40 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.828 10:34:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.828 ************************************ 00:06:14.828 END TEST event 00:06:14.828 ************************************ 00:06:14.828 10:34:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.828 10:34:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.828 10:34:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.828 10:34:40 -- common/autotest_common.sh@10 -- # set +x 00:06:14.828 ************************************ 00:06:14.828 START TEST thread 00:06:14.828 ************************************ 00:06:14.828 10:34:40 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.828 * Looking for test storage... 
00:06:14.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:14.828 10:34:40 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.828 10:34:40 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.828 10:34:40 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.828 10:34:40 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.828 10:34:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.828 10:34:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.828 10:34:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.828 10:34:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.828 10:34:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.828 10:34:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.828 10:34:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.828 10:34:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.828 10:34:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.828 10:34:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.828 10:34:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.828 10:34:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:14.828 10:34:40 thread -- scripts/common.sh@345 -- # : 1 00:06:14.828 10:34:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.828 10:34:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.828 10:34:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:14.828 10:34:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:14.828 10:34:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.828 10:34:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:14.828 10:34:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.828 10:34:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:14.828 10:34:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:14.828 10:34:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.828 10:34:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:14.828 10:34:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.829 10:34:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.829 10:34:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.829 10:34:40 thread -- scripts/common.sh@368 -- # return 0 00:06:14.829 10:34:40 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.829 10:34:40 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.829 --rc genhtml_branch_coverage=1 00:06:14.829 --rc genhtml_function_coverage=1 00:06:14.829 --rc genhtml_legend=1 00:06:14.829 --rc geninfo_all_blocks=1 00:06:14.829 --rc geninfo_unexecuted_blocks=1 00:06:14.829 00:06:14.829 ' 00:06:14.829 10:34:40 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.829 --rc genhtml_branch_coverage=1 00:06:14.829 --rc genhtml_function_coverage=1 00:06:14.829 --rc genhtml_legend=1 00:06:14.829 --rc geninfo_all_blocks=1 00:06:14.829 --rc geninfo_unexecuted_blocks=1 00:06:14.829 00:06:14.829 ' 00:06:14.829 10:34:40 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.829 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.829 --rc genhtml_branch_coverage=1 00:06:14.829 --rc genhtml_function_coverage=1 00:06:14.829 --rc genhtml_legend=1 00:06:14.829 --rc geninfo_all_blocks=1 00:06:14.829 --rc geninfo_unexecuted_blocks=1 00:06:14.829 00:06:14.829 ' 00:06:14.829 10:34:40 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.829 --rc genhtml_branch_coverage=1 00:06:14.829 --rc genhtml_function_coverage=1 00:06:14.829 --rc genhtml_legend=1 00:06:14.829 --rc geninfo_all_blocks=1 00:06:14.829 --rc geninfo_unexecuted_blocks=1 00:06:14.829 00:06:14.829 ' 00:06:14.829 10:34:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.829 10:34:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:14.829 10:34:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.829 10:34:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.829 ************************************ 00:06:14.829 START TEST thread_poller_perf 00:06:14.829 ************************************ 00:06:14.829 10:34:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.829 [2024-11-18 10:34:40.549489] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:14.829 [2024-11-18 10:34:40.549649] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59622 ] 00:06:15.089 [2024-11-18 10:34:40.754446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.089 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:15.089 [2024-11-18 10:34:40.895471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.471 [2024-11-18T10:34:42.356Z] ====================================== 00:06:16.471 [2024-11-18T10:34:42.356Z] busy:2301490368 (cyc) 00:06:16.471 [2024-11-18T10:34:42.356Z] total_run_count: 408000 00:06:16.471 [2024-11-18T10:34:42.356Z] tsc_hz: 2290000000 (cyc) 00:06:16.471 [2024-11-18T10:34:42.356Z] ====================================== 00:06:16.471 [2024-11-18T10:34:42.356Z] poller_cost: 5640 (cyc), 2462 (nsec) 00:06:16.471 00:06:16.471 real 0m1.637s 00:06:16.471 user 0m1.419s 00:06:16.471 sys 0m0.110s 00:06:16.471 ************************************ 00:06:16.471 END TEST thread_poller_perf 00:06:16.471 ************************************ 00:06:16.471 10:34:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.471 10:34:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.471 10:34:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.471 10:34:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:16.471 10:34:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.471 10:34:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.471 ************************************ 00:06:16.471 START TEST thread_poller_perf 00:06:16.471 
************************************ 00:06:16.471 10:34:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.471 [2024-11-18 10:34:42.257749] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:16.471 [2024-11-18 10:34:42.257850] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59664 ] 00:06:16.731 [2024-11-18 10:34:42.437635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.731 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:16.731 [2024-11-18 10:34:42.572389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.115 [2024-11-18T10:34:44.000Z] ====================================== 00:06:18.115 [2024-11-18T10:34:44.000Z] busy:2293850156 (cyc) 00:06:18.115 [2024-11-18T10:34:44.000Z] total_run_count: 5516000 00:06:18.115 [2024-11-18T10:34:44.000Z] tsc_hz: 2290000000 (cyc) 00:06:18.115 [2024-11-18T10:34:44.000Z] ====================================== 00:06:18.115 [2024-11-18T10:34:44.000Z] poller_cost: 415 (cyc), 181 (nsec) 00:06:18.115 00:06:18.115 real 0m1.604s 00:06:18.115 user 0m1.374s 00:06:18.115 sys 0m0.123s 00:06:18.115 10:34:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.115 10:34:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.115 ************************************ 00:06:18.115 END TEST thread_poller_perf 00:06:18.115 ************************************ 00:06:18.115 10:34:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.115 ************************************ 00:06:18.115 END TEST thread 00:06:18.115 ************************************ 00:06:18.115 
00:06:18.115 real 0m3.603s 00:06:18.115 user 0m2.944s 00:06:18.115 sys 0m0.459s 00:06:18.115 10:34:43 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.115 10:34:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.115 10:34:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:18.115 10:34:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:18.115 10:34:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.115 10:34:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.115 10:34:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.115 ************************************ 00:06:18.115 START TEST app_cmdline 00:06:18.115 ************************************ 00:06:18.115 10:34:43 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:18.375 * Looking for test storage... 00:06:18.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.375 10:34:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.375 --rc genhtml_branch_coverage=1 00:06:18.375 --rc genhtml_function_coverage=1 00:06:18.375 --rc 
genhtml_legend=1 00:06:18.375 --rc geninfo_all_blocks=1 00:06:18.375 --rc geninfo_unexecuted_blocks=1 00:06:18.375 00:06:18.375 ' 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.375 --rc genhtml_branch_coverage=1 00:06:18.375 --rc genhtml_function_coverage=1 00:06:18.375 --rc genhtml_legend=1 00:06:18.375 --rc geninfo_all_blocks=1 00:06:18.375 --rc geninfo_unexecuted_blocks=1 00:06:18.375 00:06:18.375 ' 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.375 --rc genhtml_branch_coverage=1 00:06:18.375 --rc genhtml_function_coverage=1 00:06:18.375 --rc genhtml_legend=1 00:06:18.375 --rc geninfo_all_blocks=1 00:06:18.375 --rc geninfo_unexecuted_blocks=1 00:06:18.375 00:06:18.375 ' 00:06:18.375 10:34:44 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.375 --rc genhtml_branch_coverage=1 00:06:18.375 --rc genhtml_function_coverage=1 00:06:18.375 --rc genhtml_legend=1 00:06:18.375 --rc geninfo_all_blocks=1 00:06:18.375 --rc geninfo_unexecuted_blocks=1 00:06:18.375 00:06:18.375 ' 00:06:18.375 10:34:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:18.375 10:34:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59752 00:06:18.376 10:34:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:18.376 10:34:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59752 00:06:18.376 10:34:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59752 ']' 00:06:18.376 10:34:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.376 10:34:44 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:18.376 10:34:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.376 10:34:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.376 10:34:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.636 [2024-11-18 10:34:44.281547] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:18.636 [2024-11-18 10:34:44.281744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59752 ] 00:06:18.636 [2024-11-18 10:34:44.460194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.897 [2024-11-18 10:34:44.590896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.837 10:34:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.837 10:34:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:19.837 10:34:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:20.097 { 00:06:20.097 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:20.097 "fields": { 00:06:20.097 "major": 25, 00:06:20.097 "minor": 1, 00:06:20.097 "patch": 0, 00:06:20.097 "suffix": "-pre", 00:06:20.097 "commit": "83e8405e4" 00:06:20.097 } 00:06:20.097 } 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:20.097 10:34:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:20.097 10:34:45 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.097 request: 00:06:20.097 { 00:06:20.097 "method": "env_dpdk_get_mem_stats", 00:06:20.097 "req_id": 1 00:06:20.097 } 00:06:20.097 Got JSON-RPC error response 00:06:20.097 response: 00:06:20.097 { 00:06:20.097 "code": -32601, 00:06:20.097 "message": "Method not found" 00:06:20.097 } 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.362 10:34:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59752 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59752 ']' 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59752 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.362 10:34:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59752 00:06:20.362 killing process with pid 59752 00:06:20.362 10:34:46 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.362 10:34:46 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.362 10:34:46 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59752' 00:06:20.362 10:34:46 app_cmdline -- common/autotest_common.sh@973 -- # kill 59752 00:06:20.362 10:34:46 app_cmdline -- common/autotest_common.sh@978 -- # wait 59752 00:06:22.906 00:06:22.906 real 0m4.571s 00:06:22.906 user 0m4.570s 00:06:22.906 sys 0m0.782s 00:06:22.906 10:34:48 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.906 10:34:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.906 ************************************ 00:06:22.906 END TEST app_cmdline 00:06:22.906 ************************************ 00:06:22.906 10:34:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:22.906 10:34:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.906 10:34:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.906 10:34:48 -- common/autotest_common.sh@10 -- # set +x 00:06:22.906 ************************************ 00:06:22.906 START TEST version 00:06:22.906 ************************************ 00:06:22.906 10:34:48 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:22.906 * Looking for test storage... 00:06:22.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:22.906 10:34:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.906 10:34:48 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.906 10:34:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.906 10:34:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.906 10:34:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.906 10:34:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.906 10:34:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.906 10:34:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.906 10:34:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.906 10:34:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.906 10:34:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.906 10:34:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.906 10:34:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.906 10:34:48 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:22.906 10:34:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.906 10:34:48 version -- scripts/common.sh@344 -- # case "$op" in 00:06:22.906 10:34:48 version -- scripts/common.sh@345 -- # : 1 00:06:22.906 10:34:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.906 10:34:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.166 10:34:48 version -- scripts/common.sh@365 -- # decimal 1 00:06:23.166 10:34:48 version -- scripts/common.sh@353 -- # local d=1 00:06:23.166 10:34:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.166 10:34:48 version -- scripts/common.sh@355 -- # echo 1 00:06:23.166 10:34:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.166 10:34:48 version -- scripts/common.sh@366 -- # decimal 2 00:06:23.166 10:34:48 version -- scripts/common.sh@353 -- # local d=2 00:06:23.166 10:34:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.166 10:34:48 version -- scripts/common.sh@355 -- # echo 2 00:06:23.166 10:34:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.166 10:34:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.166 10:34:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.166 10:34:48 version -- scripts/common.sh@368 -- # return 0 00:06:23.166 10:34:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.166 10:34:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.166 --rc genhtml_branch_coverage=1 00:06:23.166 --rc genhtml_function_coverage=1 00:06:23.166 --rc genhtml_legend=1 00:06:23.166 --rc geninfo_all_blocks=1 00:06:23.166 --rc geninfo_unexecuted_blocks=1 00:06:23.166 00:06:23.166 ' 00:06:23.166 10:34:48 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:23.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.166 --rc genhtml_branch_coverage=1 00:06:23.166 --rc genhtml_function_coverage=1 00:06:23.166 --rc genhtml_legend=1 00:06:23.166 --rc geninfo_all_blocks=1 00:06:23.166 --rc geninfo_unexecuted_blocks=1 00:06:23.166 00:06:23.166 ' 00:06:23.166 10:34:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.166 --rc genhtml_branch_coverage=1 00:06:23.166 --rc genhtml_function_coverage=1 00:06:23.166 --rc genhtml_legend=1 00:06:23.166 --rc geninfo_all_blocks=1 00:06:23.166 --rc geninfo_unexecuted_blocks=1 00:06:23.166 00:06:23.166 ' 00:06:23.166 10:34:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.166 --rc genhtml_branch_coverage=1 00:06:23.166 --rc genhtml_function_coverage=1 00:06:23.166 --rc genhtml_legend=1 00:06:23.166 --rc geninfo_all_blocks=1 00:06:23.166 --rc geninfo_unexecuted_blocks=1 00:06:23.166 00:06:23.166 ' 00:06:23.166 10:34:48 version -- app/version.sh@17 -- # get_header_version major 00:06:23.166 10:34:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:23.166 10:34:48 version -- app/version.sh@14 -- # cut -f2 00:06:23.166 10:34:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.166 10:34:48 version -- app/version.sh@17 -- # major=25 00:06:23.166 10:34:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.166 10:34:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:23.166 10:34:48 version -- app/version.sh@14 -- # cut -f2 00:06:23.166 10:34:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.166 10:34:48 version -- app/version.sh@18 -- # minor=1 00:06:23.166 10:34:48 
version -- app/version.sh@19 -- # get_header_version patch 00:06:23.166 10:34:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:23.166 10:34:48 version -- app/version.sh@14 -- # cut -f2 00:06:23.166 10:34:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.166 10:34:48 version -- app/version.sh@19 -- # patch=0 00:06:23.166 10:34:48 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.166 10:34:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:23.166 10:34:48 version -- app/version.sh@14 -- # cut -f2 00:06:23.167 10:34:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.167 10:34:48 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.167 10:34:48 version -- app/version.sh@22 -- # version=25.1 00:06:23.167 10:34:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.167 10:34:48 version -- app/version.sh@28 -- # version=25.1rc0 00:06:23.167 10:34:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:23.167 10:34:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.167 10:34:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:23.167 10:34:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:23.167 ************************************ 00:06:23.167 END TEST version 00:06:23.167 ************************************ 00:06:23.167 00:06:23.167 real 0m0.320s 00:06:23.167 user 0m0.198s 00:06:23.167 sys 0m0.181s 00:06:23.167 10:34:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.167 10:34:48 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.167 
10:34:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:23.167 10:34:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:23.167 10:34:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:23.167 10:34:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.167 10:34:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.167 10:34:48 -- common/autotest_common.sh@10 -- # set +x 00:06:23.167 ************************************ 00:06:23.167 START TEST bdev_raid 00:06:23.167 ************************************ 00:06:23.167 10:34:48 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:23.434 * Looking for test storage... 00:06:23.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.435 10:34:49 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.435 --rc genhtml_branch_coverage=1 00:06:23.435 --rc genhtml_function_coverage=1 00:06:23.435 --rc genhtml_legend=1 00:06:23.435 --rc geninfo_all_blocks=1 00:06:23.435 --rc geninfo_unexecuted_blocks=1 00:06:23.435 00:06:23.435 ' 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.435 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:23.435 --rc genhtml_branch_coverage=1 00:06:23.435 --rc genhtml_function_coverage=1 00:06:23.435 --rc genhtml_legend=1 00:06:23.435 --rc geninfo_all_blocks=1 00:06:23.435 --rc geninfo_unexecuted_blocks=1 00:06:23.435 00:06:23.435 ' 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.435 --rc genhtml_branch_coverage=1 00:06:23.435 --rc genhtml_function_coverage=1 00:06:23.435 --rc genhtml_legend=1 00:06:23.435 --rc geninfo_all_blocks=1 00:06:23.435 --rc geninfo_unexecuted_blocks=1 00:06:23.435 00:06:23.435 ' 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.435 --rc genhtml_branch_coverage=1 00:06:23.435 --rc genhtml_function_coverage=1 00:06:23.435 --rc genhtml_legend=1 00:06:23.435 --rc geninfo_all_blocks=1 00:06:23.435 --rc geninfo_unexecuted_blocks=1 00:06:23.435 00:06:23.435 ' 00:06:23.435 10:34:49 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:23.435 10:34:49 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:23.435 10:34:49 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:23.435 10:34:49 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:23.435 10:34:49 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:23.435 10:34:49 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:23.435 10:34:49 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.435 10:34:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:23.435 ************************************ 
00:06:23.435 START TEST raid1_resize_data_offset_test 00:06:23.435 ************************************ 00:06:23.435 Process raid pid: 59941 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59941 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59941' 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59941 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59941 ']' 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.435 10:34:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.435 [2024-11-18 10:34:49.300043] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:23.435 [2024-11-18 10:34:49.300283] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.696 [2024-11-18 10:34:49.474970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.956 [2024-11-18 10:34:49.611153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.956 [2024-11-18 10:34:49.836237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.956 [2024-11-18 10:34:49.836376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 malloc0 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 malloc1 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.526 10:34:50 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 null0 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 [2024-11-18 10:34:50.322867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:24.526 [2024-11-18 10:34:50.324865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:24.526 [2024-11-18 10:34:50.324952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:24.526 [2024-11-18 10:34:50.325109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:24.526 [2024-11-18 10:34:50.325153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:24.526 [2024-11-18 10:34:50.325433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:24.526 [2024-11-18 10:34:50.325660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:24.526 [2024-11-18 10:34:50.325707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:24.526 [2024-11-18 10:34:50.325885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 [2024-11-18 10:34:50.382727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.526 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.466 malloc2 00:06:25.466 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.466 10:34:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:25.466 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.466 10:34:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.466 [2024-11-18 10:34:51.002552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:25.466 [2024-11-18 10:34:51.021052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.466 [2024-11-18 10:34:51.023083] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59941 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59941 ']' 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59941 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59941 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59941' 00:06:25.466 killing process with pid 59941 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59941 00:06:25.466 [2024-11-18 10:34:51.116456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:25.466 10:34:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59941 00:06:25.466 [2024-11-18 10:34:51.116709] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:25.466 [2024-11-18 10:34:51.116823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:25.466 [2024-11-18 10:34:51.116892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:25.466 [2024-11-18 10:34:51.153979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:25.466 [2024-11-18 10:34:51.154411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:25.466 [2024-11-18 10:34:51.154478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:27.375 [2024-11-18 10:34:53.032844] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:28.316 10:34:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:28.316 00:06:28.316 real 0m4.986s 00:06:28.316 user 0m4.699s 00:06:28.316 sys 0m0.712s 00:06:28.316 10:34:54 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.316 10:34:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.576 ************************************ 00:06:28.576 END TEST raid1_resize_data_offset_test 00:06:28.576 ************************************ 00:06:28.576 10:34:54 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:28.576 10:34:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.576 10:34:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.576 10:34:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:28.576 ************************************ 00:06:28.576 START TEST raid0_resize_superblock_test 00:06:28.576 ************************************ 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:28.576 Process raid pid: 60030 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60030 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60030' 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60030 00:06:28.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60030 ']' 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.576 10:34:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.576 [2024-11-18 10:34:54.352511] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:28.576 [2024-11-18 10:34:54.352613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.836 [2024-11-18 10:34:54.524703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.836 [2024-11-18 10:34:54.655383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.096 [2024-11-18 10:34:54.890981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.096 [2024-11-18 10:34:54.891016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.355 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.355 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:29.355 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:29.355 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.355 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.924 malloc0 00:06:29.924 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.924 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:29.924 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.924 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.924 [2024-11-18 10:34:55.775425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:29.924 [2024-11-18 10:34:55.775511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.924 [2024-11-18 10:34:55.775538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:29.924 [2024-11-18 10:34:55.775550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.924 [2024-11-18 10:34:55.777980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.924 [2024-11-18 10:34:55.778033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:29.924 pt0 00:06:29.924 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.924 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:29.924 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.924 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 03851d9a-4254-4304-b3aa-7f114af6fecd 00:06:30.184 10:34:55 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 ad3cd528-ce66-4ebd-b63a-8f2098471b7a 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 9a3ced84-68a1-446e-aae2-5eb9cbc7df23 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 [2024-11-18 10:34:55.983346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ad3cd528-ce66-4ebd-b63a-8f2098471b7a is claimed 00:06:30.184 [2024-11-18 10:34:55.983449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9a3ced84-68a1-446e-aae2-5eb9cbc7df23 is claimed 00:06:30.184 [2024-11-18 10:34:55.983569] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:30.184 [2024-11-18 10:34:55.983585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:30.184 [2024-11-18 10:34:55.983840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:30.184 [2024-11-18 10:34:55.984075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:30.184 [2024-11-18 10:34:55.984086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:30.184 [2024-11-18 10:34:55.984255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:30.184 10:34:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.184 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:30.184 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:30.184 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:30.184 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.184 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 10:34:56 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:30.443 [2024-11-18 10:34:56.091432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.443 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.443 [2024-11-18 10:34:56.127313] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:30.444 [2024-11-18 10:34:56.127406] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'ad3cd528-ce66-4ebd-b63a-8f2098471b7a' was resized: old size 131072, new size 204800 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.444 [2024-11-18 10:34:56.139243] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:30.444 [2024-11-18 10:34:56.139266] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9a3ced84-68a1-446e-aae2-5eb9cbc7df23' was resized: old size 131072, new size 204800 00:06:30.444 [2024-11-18 10:34:56.139292] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:30.444 10:34:56 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.444 [2024-11-18 10:34:56.231172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.444 [2024-11-18 10:34:56.274938] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:30.444 [2024-11-18 10:34:56.275005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:30.444 [2024-11-18 10:34:56.275018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:30.444 [2024-11-18 10:34:56.275035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:30.444 [2024-11-18 10:34:56.275129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.444 [2024-11-18 10:34:56.275160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.444 [2024-11-18 10:34:56.275187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.444 [2024-11-18 10:34:56.282884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:30.444 [2024-11-18 10:34:56.282940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:30.444 [2024-11-18 10:34:56.282961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:30.444 [2024-11-18 10:34:56.282972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:30.444 
[2024-11-18 10:34:56.285285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:30.444 [2024-11-18 10:34:56.285320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:30.444 [2024-11-18 10:34:56.286945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ad3cd528-ce66-4ebd-b63a-8f2098471b7a 00:06:30.444 [2024-11-18 10:34:56.287005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ad3cd528-ce66-4ebd-b63a-8f2098471b7a is claimed 00:06:30.444 [2024-11-18 10:34:56.287106] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9a3ced84-68a1-446e-aae2-5eb9cbc7df23 00:06:30.444 [2024-11-18 10:34:56.287126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9a3ced84-68a1-446e-aae2-5eb9cbc7df23 is claimed 00:06:30.444 [2024-11-18 10:34:56.287287] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9a3ced84-68a1-446e-aae2-5eb9cbc7df23 (2) smaller than existing raid bdev Raid (3) 00:06:30.444 [2024-11-18 10:34:56.287314] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ad3cd528-ce66-4ebd-b63a-8f2098471b7a: File exists 00:06:30.444 [2024-11-18 10:34:56.287347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:30.444 [2024-11-18 10:34:56.287358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:30.444 pt0 00:06:30.444 [2024-11-18 10:34:56.287623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:30.444 [2024-11-18 10:34:56.287786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:30.444 [2024-11-18 10:34:56.287794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:30.444 [2024-11-18 10:34:56.287935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.444 [2024-11-18 10:34:56.303166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.444 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.703 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60030 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60030 ']' 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60030 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60030 00:06:30.704 killing process with pid 60030 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60030' 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60030 00:06:30.704 [2024-11-18 10:34:56.374582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:30.704 [2024-11-18 10:34:56.374634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.704 [2024-11-18 10:34:56.374669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.704 [2024-11-18 10:34:56.374677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:30.704 10:34:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60030 00:06:32.100 [2024-11-18 10:34:57.872720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:33.486 ************************************ 00:06:33.486 END TEST raid0_resize_superblock_test 00:06:33.486 ************************************ 00:06:33.486 10:34:59 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:33.486 00:06:33.486 real 0m4.764s 00:06:33.486 user 0m4.723s 00:06:33.486 sys 0m0.752s 00:06:33.486 10:34:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.486 10:34:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.486 10:34:59 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:33.486 10:34:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.486 10:34:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.486 10:34:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:33.486 ************************************ 00:06:33.486 START TEST raid1_resize_superblock_test 00:06:33.486 ************************************ 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60128 00:06:33.486 Process raid pid: 60128 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60128' 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60128 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60128 ']' 00:06:33.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.486 10:34:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.486 [2024-11-18 10:34:59.192937] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:33.486 [2024-11-18 10:34:59.193132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.486 [2024-11-18 10:34:59.366243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.747 [2024-11-18 10:34:59.499167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.007 [2024-11-18 10:34:59.730734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.007 [2024-11-18 10:34:59.730869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.268 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.268 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:34.268 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:34.268 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:34.268 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.838 malloc0 00:06:34.838 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.838 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:34.838 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.838 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.838 [2024-11-18 10:35:00.582047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:34.838 [2024-11-18 10:35:00.582686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.838 [2024-11-18 10:35:00.582851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:34.838 [2024-11-18 10:35:00.582949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.838 pt0 00:06:34.838 [2024-11-18 10:35:00.585378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.838 [2024-11-18 10:35:00.585410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:34.838 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.838 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:34.838 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.838 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 b874175b-6e30-499e-b01a-4a2c0ee83ff1 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 0d78a5ca-91c7-4f2a-96c6-8e2278c09ca9 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 d2c38797-7041-44b1-beae-0203f12d3cab 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 [2024-11-18 10:35:00.788198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0d78a5ca-91c7-4f2a-96c6-8e2278c09ca9 is claimed 00:06:35.099 [2024-11-18 10:35:00.788304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d2c38797-7041-44b1-beae-0203f12d3cab is claimed 00:06:35.099 [2024-11-18 10:35:00.788432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:35.099 [2024-11-18 10:35:00.788448] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:35.099 [2024-11-18 10:35:00.788713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:35.099 [2024-11-18 10:35:00.788895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:35.099 [2024-11-18 10:35:00.788906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:35.099 [2024-11-18 10:35:00.789055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:35.099 [2024-11-18 10:35:00.904181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 [2024-11-18 10:35:00.952020] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.099 [2024-11-18 10:35:00.952092] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0d78a5ca-91c7-4f2a-96c6-8e2278c09ca9' was resized: old size 131072, new size 204800 
00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.099 [2024-11-18 10:35:00.963947] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.099 [2024-11-18 10:35:00.963970] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd2c38797-7041-44b1-beae-0203f12d3cab' was resized: old size 131072, new size 204800 00:06:35.099 [2024-11-18 10:35:00.963999] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.099 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.360 10:35:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:35.360 10:35:01 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.360 [2024-11-18 10:35:01.075833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:35.360 [2024-11-18 10:35:01.119574] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:35.360 [2024-11-18 10:35:01.119687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:35.360 [2024-11-18 10:35:01.119734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:35.360 [2024-11-18 10:35:01.119888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:35.360 [2024-11-18 10:35:01.120109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:35.360 [2024-11-18 10:35:01.120221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:35.360 [2024-11-18 10:35:01.120276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.360 [2024-11-18 10:35:01.131511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:35.360 [2024-11-18 10:35:01.131578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:35.360 [2024-11-18 10:35:01.131599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:35.360 [2024-11-18 10:35:01.131611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:35.360 [2024-11-18 10:35:01.133939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:35.360 
[2024-11-18 10:35:01.134021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:35.360 [2024-11-18 10:35:01.135681] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0d78a5ca-91c7-4f2a-96c6-8e2278c09ca9 00:06:35.360 [2024-11-18 10:35:01.135752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0d78a5ca-91c7-4f2a-96c6-8e2278c09ca9 is claimed 00:06:35.360 [2024-11-18 10:35:01.135864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d2c38797-7041-44b1-beae-0203f12d3cab 00:06:35.360 [2024-11-18 10:35:01.135884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d2c38797-7041-44b1-beae-0203f12d3cab is claimed 00:06:35.360 [2024-11-18 10:35:01.136032] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d2c38797-7041-44b1-beae-0203f12d3cab (2) smaller than existing raid bdev Raid (3) 00:06:35.360 [2024-11-18 10:35:01.136054] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0d78a5ca-91c7-4f2a-96c6-8e2278c09ca9: File exists 00:06:35.360 [2024-11-18 10:35:01.136089] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:35.360 [2024-11-18 10:35:01.136117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:35.360 [2024-11-18 10:35:01.136377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:35.360 pt0 00:06:35.360 [2024-11-18 10:35:01.136535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:35.360 [2024-11-18 10:35:01.136544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:35.360 [2024-11-18 10:35:01.136694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.360 [2024-11-18 10:35:01.159914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60128 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60128 ']' 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 60128 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60128 00:06:35.360 killing process with pid 60128 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60128' 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60128 00:06:35.360 [2024-11-18 10:35:01.217498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:35.360 [2024-11-18 10:35:01.217556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:35.360 [2024-11-18 10:35:01.217598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:35.360 [2024-11-18 10:35:01.217606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:35.360 10:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60128 00:06:37.269 [2024-11-18 10:35:02.712355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:38.210 10:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:38.210 00:06:38.210 real 0m4.763s 00:06:38.210 user 0m4.777s 00:06:38.210 sys 0m0.728s 00:06:38.210 ************************************ 00:06:38.210 END TEST raid1_resize_superblock_test 00:06:38.210 
************************************ 00:06:38.210 10:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.210 10:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.210 10:35:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:38.210 10:35:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:38.210 10:35:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:38.210 10:35:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:38.210 10:35:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:38.210 10:35:03 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:38.210 10:35:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.210 10:35:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.210 10:35:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:38.210 ************************************ 00:06:38.210 START TEST raid_function_test_raid0 00:06:38.210 ************************************ 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60232 00:06:38.210 Process raid pid: 60232 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid 
pid: 60232' 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60232 00:06:38.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60232 ']' 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.210 10:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:38.210 [2024-11-18 10:35:04.060063] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:38.210 [2024-11-18 10:35:04.060252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.470 [2024-11-18 10:35:04.231433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.729 [2024-11-18 10:35:04.363519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.729 [2024-11-18 10:35:04.589648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.729 [2024-11-18 10:35:04.589681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.989 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.989 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:38.989 10:35:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:38.989 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.989 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:39.248 Base_1 00:06:39.248 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.248 10:35:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:39.248 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.248 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:39.248 Base_2 00:06:39.248 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.248 10:35:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:39.248 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.249 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:39.249 [2024-11-18 10:35:04.966542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:39.249 [2024-11-18 10:35:04.968570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:39.249 [2024-11-18 10:35:04.968640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:39.249 [2024-11-18 10:35:04.968651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:39.249 [2024-11-18 10:35:04.968892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:39.249 [2024-11-18 10:35:04.969034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:39.249 [2024-11-18 10:35:04.969043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:39.249 [2024-11-18 10:35:04.969196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.249 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.249 10:35:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:39.249 10:35:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:39.249 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.249 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:39.249 10:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:39.249 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:39.509 [2024-11-18 10:35:05.194183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:39.509 /dev/nbd0 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.509 
10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.509 1+0 records in 00:06:39.509 1+0 records out 00:06:39.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626985 s, 6.5 MB/s 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:39.509 10:35:05 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.767 { 00:06:39.767 "nbd_device": "/dev/nbd0", 00:06:39.767 "bdev_name": "raid" 00:06:39.767 } 00:06:39.767 ]' 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.767 { 00:06:39.767 "nbd_device": "/dev/nbd0", 00:06:39.767 "bdev_name": "raid" 00:06:39.767 } 00:06:39.767 ]' 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:39.767 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:39.768 4096+0 records in 00:06:39.768 4096+0 records out 00:06:39.768 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0325808 s, 64.4 MB/s 00:06:39.768 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:40.027 4096+0 records in 00:06:40.027 4096+0 records out 00:06:40.027 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.222358 s, 9.4 MB/s 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:40.027 128+0 records in 00:06:40.027 128+0 records out 00:06:40.027 65536 bytes (66 kB, 64 KiB) copied, 0.00118499 s, 55.3 MB/s 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:40.027 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:40.027 2035+0 records in 00:06:40.027 2035+0 records out 00:06:40.027 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0145786 s, 71.5 MB/s 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:40.028 10:35:05 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:40.028 456+0 records in 00:06:40.028 456+0 records out 00:06:40.028 233472 bytes (233 kB, 228 KiB) copied, 0.00360592 s, 64.7 MB/s 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.028 10:35:05 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.028 10:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.288 [2024-11-18 10:35:06.110742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:40.288 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60232 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60232 ']' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60232 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60232 00:06:40.547 killing process with pid 60232 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60232' 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60232 
00:06:40.547 [2024-11-18 10:35:06.404205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.547 [2024-11-18 10:35:06.404321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.547 [2024-11-18 10:35:06.404369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.547 [2024-11-18 10:35:06.404385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:40.547 10:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60232 00:06:40.807 [2024-11-18 10:35:06.618342] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:42.190 10:35:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:42.190 00:06:42.190 real 0m3.804s 00:06:42.190 user 0m4.233s 00:06:42.190 sys 0m1.045s 00:06:42.190 10:35:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.190 10:35:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:42.190 ************************************ 00:06:42.190 END TEST raid_function_test_raid0 00:06:42.190 ************************************ 00:06:42.190 10:35:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:42.190 10:35:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.190 10:35:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.190 10:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.190 ************************************ 00:06:42.190 START TEST raid_function_test_concat 00:06:42.190 ************************************ 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60364 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60364' 00:06:42.190 Process raid pid: 60364 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60364 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60364 ']' 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.190 10:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:42.190 [2024-11-18 10:35:07.925135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:42.190 [2024-11-18 10:35:07.925354] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.449 [2024-11-18 10:35:08.098617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.449 [2024-11-18 10:35:08.236643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.708 [2024-11-18 10:35:08.474040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.708 [2024-11-18 10:35:08.474197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:42.968 Base_1 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.968 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:43.230 Base_2 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:43.230 [2024-11-18 10:35:08.884092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:43.230 [2024-11-18 10:35:08.886091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:43.230 [2024-11-18 10:35:08.886165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:43.230 [2024-11-18 10:35:08.886189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:43.230 [2024-11-18 10:35:08.886434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:43.230 [2024-11-18 10:35:08.886634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:43.230 [2024-11-18 10:35:08.886645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:43.230 [2024-11-18 10:35:08.886796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.230 10:35:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:43.230 10:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:43.491 [2024-11-18 10:35:09.131659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:43.491 /dev/nbd0 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.491 1+0 records in 00:06:43.491 1+0 records out 00:06:43.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040525 s, 10.1 MB/s 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:43.491 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.751 { 00:06:43.751 "nbd_device": "/dev/nbd0", 00:06:43.751 "bdev_name": "raid" 00:06:43.751 } 00:06:43.751 ]' 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.751 { 00:06:43.751 "nbd_device": "/dev/nbd0", 00:06:43.751 "bdev_name": "raid" 00:06:43.751 } 00:06:43.751 ]' 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:43.751 10:35:09 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:43.751 4096+0 records in 00:06:43.751 4096+0 records out 00:06:43.751 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0351437 s, 59.7 MB/s 00:06:43.751 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:44.011 4096+0 records in 00:06:44.011 4096+0 records out 00:06:44.011 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.225786 s, 9.3 MB/s 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:44.011 128+0 records in 00:06:44.011 128+0 records out 00:06:44.011 65536 bytes (66 kB, 64 KiB) copied, 0.00159438 s, 41.1 MB/s 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:44.011 2035+0 records in 00:06:44.011 2035+0 records out 00:06:44.011 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0125064 s, 83.3 MB/s 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:44.011 10:35:09 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:44.011 456+0 records in 00:06:44.011 456+0 records out 00:06:44.011 233472 bytes (233 kB, 228 KiB) copied, 0.0035433 s, 65.9 MB/s 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:44.011 
10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.011 10:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.271 [2024-11-18 10:35:10.065735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:44.271 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.530 10:35:10 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60364 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60364 ']' 00:06:44.530 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60364 00:06:44.531 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:44.531 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.531 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60364 00:06:44.531 killing process with pid 60364 00:06:44.531 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.531 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.531 10:35:10 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60364' 00:06:44.531 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60364 00:06:44.531 [2024-11-18 10:35:10.377103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:44.531 [2024-11-18 10:35:10.377212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:44.531 [2024-11-18 10:35:10.377263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:44.531 [2024-11-18 10:35:10.377276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:44.531 10:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60364 00:06:44.791 [2024-11-18 10:35:10.594129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.170 ************************************ 00:06:46.170 END TEST raid_function_test_concat 00:06:46.170 ************************************ 00:06:46.170 10:35:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:46.170 00:06:46.170 real 0m3.900s 00:06:46.170 user 0m4.423s 00:06:46.170 sys 0m1.036s 00:06:46.170 10:35:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.170 10:35:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 10:35:11 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:46.170 10:35:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.170 10:35:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.170 10:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 ************************************ 00:06:46.170 START TEST raid0_resize_test 00:06:46.170 ************************************ 00:06:46.170 10:35:11 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60488 00:06:46.170 Process raid pid: 60488 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60488' 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60488 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60488 ']' 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.170 10:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 [2024-11-18 10:35:11.900747] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:46.170 [2024-11-18 10:35:11.900866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.431 [2024-11-18 10:35:12.079665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.431 [2024-11-18 10:35:12.212882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.691 [2024-11-18 10:35:12.441113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.691 [2024-11-18 10:35:12.441177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.952 Base_1 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:46.952 
10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.952 Base_2 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.952 [2024-11-18 10:35:12.738988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:46.952 [2024-11-18 10:35:12.740904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:46.952 [2024-11-18 10:35:12.740960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.952 [2024-11-18 10:35:12.740971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.952 [2024-11-18 10:35:12.741202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:46.952 [2024-11-18 10:35:12.741328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.952 [2024-11-18 10:35:12.741342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:46.952 [2024-11-18 10:35:12.741471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:46.952 
10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.952 [2024-11-18 10:35:12.750950] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:46.952 [2024-11-18 10:35:12.750978] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:46.952 true 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.952 [2024-11-18 10:35:12.767094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:06:46.952 [2024-11-18 10:35:12.806845] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:46.952 [2024-11-18 10:35:12.806878] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:46.952 [2024-11-18 10:35:12.806903] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:46.952 true 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.952 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.952 [2024-11-18 10:35:12.822994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60488 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60488 ']' 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60488 
00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60488 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.213 killing process with pid 60488 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60488' 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60488 00:06:47.213 [2024-11-18 10:35:12.906162] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.213 [2024-11-18 10:35:12.906258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.213 [2024-11-18 10:35:12.906302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.213 [2024-11-18 10:35:12.906311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:47.213 10:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60488 00:06:47.213 [2024-11-18 10:35:12.923854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.596 10:35:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:48.596 00:06:48.596 real 0m2.262s 00:06:48.596 user 0m2.281s 00:06:48.596 sys 0m0.441s 00:06:48.596 10:35:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.596 10:35:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.596 ************************************ 00:06:48.596 END TEST 
raid0_resize_test 00:06:48.596 ************************************ 00:06:48.596 10:35:14 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:48.596 10:35:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.596 10:35:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.596 10:35:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.596 ************************************ 00:06:48.596 START TEST raid1_resize_test 00:06:48.596 ************************************ 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60549 00:06:48.596 Process raid pid: 60549 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60549' 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60549 00:06:48.596 10:35:14 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60549 ']' 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.596 10:35:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.596 [2024-11-18 10:35:14.228488] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:48.596 [2024-11-18 10:35:14.228597] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.596 [2024-11-18 10:35:14.406361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.863 [2024-11-18 10:35:14.538020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.122 [2024-11-18 10:35:14.775783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.122 [2024-11-18 10:35:14.775827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:49.383 10:35:15 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.383 Base_1 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.383 Base_2 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.383 [2024-11-18 10:35:15.059402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:49.383 [2024-11-18 10:35:15.061315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:49.383 [2024-11-18 10:35:15.061378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.383 [2024-11-18 10:35:15.061388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:49.383 [2024-11-18 10:35:15.061613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.383 [2024-11-18 10:35:15.061738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.383 [2024-11-18 10:35:15.061753] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:49.383 [2024-11-18 10:35:15.061877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.383 [2024-11-18 10:35:15.071353] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:49.383 [2024-11-18 10:35:15.071386] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:49.383 true 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.383 [2024-11-18 10:35:15.087471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:49.383 10:35:15 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.383 [2024-11-18 10:35:15.123258] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:49.383 [2024-11-18 10:35:15.123280] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:49.383 [2024-11-18 10:35:15.123301] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:49.383 true 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.383 [2024-11-18 10:35:15.139399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:49.383 10:35:15 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60549 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60549 ']' 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60549 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60549 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.383 killing process with pid 60549 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60549' 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60549 00:06:49.383 [2024-11-18 10:35:15.217879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.383 [2024-11-18 10:35:15.217953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.383 10:35:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60549 00:06:49.383 [2024-11-18 10:35:15.218418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.383 [2024-11-18 10:35:15.218441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:49.383 [2024-11-18 10:35:15.236107] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:06:50.763 10:35:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:50.763 00:06:50.763 real 0m2.243s 00:06:50.763 user 0m2.255s 00:06:50.763 sys 0m0.427s 00:06:50.763 10:35:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.763 10:35:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.763 ************************************ 00:06:50.763 END TEST raid1_resize_test 00:06:50.763 ************************************ 00:06:50.763 10:35:16 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:50.763 10:35:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:50.763 10:35:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:50.763 10:35:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:50.763 10:35:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.763 10:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.763 ************************************ 00:06:50.763 START TEST raid_state_function_test 00:06:50.763 ************************************ 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:50.763 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60606 
00:06:50.764 Process raid pid: 60606 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60606' 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60606 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60606 ']' 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.764 10:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.764 [2024-11-18 10:35:16.540560] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:50.764 [2024-11-18 10:35:16.540669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.023 [2024-11-18 10:35:16.703564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.023 [2024-11-18 10:35:16.827640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.283 [2024-11-18 10:35:17.060236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.283 [2024-11-18 10:35:17.060279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.543 [2024-11-18 10:35:17.381443] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:51.543 [2024-11-18 10:35:17.381504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:51.543 [2024-11-18 10:35:17.381514] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.543 [2024-11-18 10:35:17.381524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.543 10:35:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.543 "name": "Existed_Raid", 00:06:51.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.543 "strip_size_kb": 64, 00:06:51.543 "state": "configuring", 00:06:51.543 
"raid_level": "raid0", 00:06:51.543 "superblock": false, 00:06:51.543 "num_base_bdevs": 2, 00:06:51.543 "num_base_bdevs_discovered": 0, 00:06:51.543 "num_base_bdevs_operational": 2, 00:06:51.543 "base_bdevs_list": [ 00:06:51.543 { 00:06:51.543 "name": "BaseBdev1", 00:06:51.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.543 "is_configured": false, 00:06:51.543 "data_offset": 0, 00:06:51.543 "data_size": 0 00:06:51.543 }, 00:06:51.543 { 00:06:51.543 "name": "BaseBdev2", 00:06:51.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.543 "is_configured": false, 00:06:51.543 "data_offset": 0, 00:06:51.543 "data_size": 0 00:06:51.543 } 00:06:51.543 ] 00:06:51.543 }' 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.543 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 [2024-11-18 10:35:17.800734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:52.112 [2024-11-18 10:35:17.800774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:52.112 [2024-11-18 10:35:17.812707] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:52.112 [2024-11-18 10:35:17.812748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:52.112 [2024-11-18 10:35:17.812757] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.112 [2024-11-18 10:35:17.812770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 [2024-11-18 10:35:17.866643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:52.112 BaseBdev1 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 [ 00:06:52.112 { 00:06:52.112 "name": "BaseBdev1", 00:06:52.112 "aliases": [ 00:06:52.112 "7913ccc8-a281-4e2d-81f1-c45cf5f8580c" 00:06:52.112 ], 00:06:52.112 "product_name": "Malloc disk", 00:06:52.112 "block_size": 512, 00:06:52.112 "num_blocks": 65536, 00:06:52.112 "uuid": "7913ccc8-a281-4e2d-81f1-c45cf5f8580c", 00:06:52.112 "assigned_rate_limits": { 00:06:52.112 "rw_ios_per_sec": 0, 00:06:52.112 "rw_mbytes_per_sec": 0, 00:06:52.112 "r_mbytes_per_sec": 0, 00:06:52.112 "w_mbytes_per_sec": 0 00:06:52.112 }, 00:06:52.112 "claimed": true, 00:06:52.112 "claim_type": "exclusive_write", 00:06:52.112 "zoned": false, 00:06:52.112 "supported_io_types": { 00:06:52.112 "read": true, 00:06:52.112 "write": true, 00:06:52.112 "unmap": true, 00:06:52.112 "flush": true, 00:06:52.112 "reset": true, 00:06:52.112 "nvme_admin": false, 00:06:52.112 "nvme_io": false, 00:06:52.112 "nvme_io_md": false, 00:06:52.112 "write_zeroes": true, 00:06:52.112 "zcopy": true, 00:06:52.112 "get_zone_info": false, 00:06:52.112 "zone_management": false, 00:06:52.112 "zone_append": false, 00:06:52.112 "compare": false, 00:06:52.112 "compare_and_write": false, 00:06:52.112 "abort": true, 00:06:52.112 "seek_hole": false, 00:06:52.112 "seek_data": false, 00:06:52.112 "copy": true, 00:06:52.112 "nvme_iov_md": 
false 00:06:52.112 }, 00:06:52.112 "memory_domains": [ 00:06:52.112 { 00:06:52.112 "dma_device_id": "system", 00:06:52.112 "dma_device_type": 1 00:06:52.112 }, 00:06:52.112 { 00:06:52.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.112 "dma_device_type": 2 00:06:52.112 } 00:06:52.112 ], 00:06:52.112 "driver_specific": {} 00:06:52.112 } 00:06:52.112 ] 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.112 
10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.112 "name": "Existed_Raid", 00:06:52.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.112 "strip_size_kb": 64, 00:06:52.112 "state": "configuring", 00:06:52.112 "raid_level": "raid0", 00:06:52.112 "superblock": false, 00:06:52.112 "num_base_bdevs": 2, 00:06:52.112 "num_base_bdevs_discovered": 1, 00:06:52.112 "num_base_bdevs_operational": 2, 00:06:52.112 "base_bdevs_list": [ 00:06:52.112 { 00:06:52.112 "name": "BaseBdev1", 00:06:52.112 "uuid": "7913ccc8-a281-4e2d-81f1-c45cf5f8580c", 00:06:52.112 "is_configured": true, 00:06:52.112 "data_offset": 0, 00:06:52.112 "data_size": 65536 00:06:52.112 }, 00:06:52.112 { 00:06:52.112 "name": "BaseBdev2", 00:06:52.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.112 "is_configured": false, 00:06:52.112 "data_offset": 0, 00:06:52.112 "data_size": 0 00:06:52.112 } 00:06:52.112 ] 00:06:52.112 }' 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.112 10:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.682 [2024-11-18 10:35:18.301949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:52.682 [2024-11-18 10:35:18.301994] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.682 [2024-11-18 10:35:18.313990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:52.682 [2024-11-18 10:35:18.316014] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.682 [2024-11-18 10:35:18.316054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.682 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.683 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.683 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.683 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.683 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.683 "name": "Existed_Raid", 00:06:52.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.683 "strip_size_kb": 64, 00:06:52.683 "state": "configuring", 00:06:52.683 "raid_level": "raid0", 00:06:52.683 "superblock": false, 00:06:52.683 "num_base_bdevs": 2, 00:06:52.683 "num_base_bdevs_discovered": 1, 00:06:52.683 "num_base_bdevs_operational": 2, 00:06:52.683 "base_bdevs_list": [ 00:06:52.683 { 00:06:52.683 "name": "BaseBdev1", 00:06:52.683 "uuid": "7913ccc8-a281-4e2d-81f1-c45cf5f8580c", 00:06:52.683 "is_configured": true, 00:06:52.683 "data_offset": 0, 00:06:52.683 "data_size": 65536 00:06:52.683 }, 00:06:52.683 { 00:06:52.683 "name": "BaseBdev2", 00:06:52.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.683 "is_configured": false, 00:06:52.683 "data_offset": 0, 00:06:52.683 "data_size": 0 00:06:52.683 } 00:06:52.683 
] 00:06:52.683 }' 00:06:52.683 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.683 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.943 [2024-11-18 10:35:18.774200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:52.943 [2024-11-18 10:35:18.774244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:52.943 [2024-11-18 10:35:18.774253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:52.943 [2024-11-18 10:35:18.774541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:52.943 [2024-11-18 10:35:18.774722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:52.943 [2024-11-18 10:35:18.774742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:52.943 [2024-11-18 10:35:18.775018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.943 BaseBdev2 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:52.943 10:35:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.943 [ 00:06:52.943 { 00:06:52.943 "name": "BaseBdev2", 00:06:52.943 "aliases": [ 00:06:52.943 "b460e24c-62f8-4ce0-b9ae-10ca3d017e97" 00:06:52.943 ], 00:06:52.943 "product_name": "Malloc disk", 00:06:52.943 "block_size": 512, 00:06:52.943 "num_blocks": 65536, 00:06:52.943 "uuid": "b460e24c-62f8-4ce0-b9ae-10ca3d017e97", 00:06:52.943 "assigned_rate_limits": { 00:06:52.943 "rw_ios_per_sec": 0, 00:06:52.943 "rw_mbytes_per_sec": 0, 00:06:52.943 "r_mbytes_per_sec": 0, 00:06:52.943 "w_mbytes_per_sec": 0 00:06:52.943 }, 00:06:52.943 "claimed": true, 00:06:52.943 "claim_type": "exclusive_write", 00:06:52.943 "zoned": false, 00:06:52.943 "supported_io_types": { 00:06:52.943 "read": true, 00:06:52.943 "write": true, 00:06:52.943 "unmap": true, 00:06:52.943 "flush": true, 00:06:52.943 "reset": true, 00:06:52.943 "nvme_admin": false, 00:06:52.943 "nvme_io": false, 00:06:52.943 "nvme_io_md": 
false, 00:06:52.943 "write_zeroes": true, 00:06:52.943 "zcopy": true, 00:06:52.943 "get_zone_info": false, 00:06:52.943 "zone_management": false, 00:06:52.943 "zone_append": false, 00:06:52.943 "compare": false, 00:06:52.943 "compare_and_write": false, 00:06:52.943 "abort": true, 00:06:52.943 "seek_hole": false, 00:06:52.943 "seek_data": false, 00:06:52.943 "copy": true, 00:06:52.943 "nvme_iov_md": false 00:06:52.943 }, 00:06:52.943 "memory_domains": [ 00:06:52.943 { 00:06:52.943 "dma_device_id": "system", 00:06:52.943 "dma_device_type": 1 00:06:52.943 }, 00:06:52.943 { 00:06:52.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.943 "dma_device_type": 2 00:06:52.943 } 00:06:52.943 ], 00:06:52.943 "driver_specific": {} 00:06:52.943 } 00:06:52.943 ] 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:52.943 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.944 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.203 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.203 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.203 "name": "Existed_Raid", 00:06:53.203 "uuid": "baeb1823-7d21-4aa2-8cb8-35b78b28addb", 00:06:53.203 "strip_size_kb": 64, 00:06:53.203 "state": "online", 00:06:53.203 "raid_level": "raid0", 00:06:53.203 "superblock": false, 00:06:53.203 "num_base_bdevs": 2, 00:06:53.203 "num_base_bdevs_discovered": 2, 00:06:53.203 "num_base_bdevs_operational": 2, 00:06:53.203 "base_bdevs_list": [ 00:06:53.203 { 00:06:53.203 "name": "BaseBdev1", 00:06:53.203 "uuid": "7913ccc8-a281-4e2d-81f1-c45cf5f8580c", 00:06:53.203 "is_configured": true, 00:06:53.203 "data_offset": 0, 00:06:53.203 "data_size": 65536 00:06:53.203 }, 00:06:53.203 { 00:06:53.203 "name": "BaseBdev2", 00:06:53.203 "uuid": "b460e24c-62f8-4ce0-b9ae-10ca3d017e97", 00:06:53.203 "is_configured": true, 00:06:53.203 "data_offset": 0, 00:06:53.203 "data_size": 65536 00:06:53.203 } 00:06:53.203 ] 00:06:53.203 }' 00:06:53.203 10:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:53.203 10:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.468 [2024-11-18 10:35:19.249600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.468 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:53.468 "name": "Existed_Raid", 00:06:53.468 "aliases": [ 00:06:53.468 "baeb1823-7d21-4aa2-8cb8-35b78b28addb" 00:06:53.468 ], 00:06:53.468 "product_name": "Raid Volume", 00:06:53.468 "block_size": 512, 00:06:53.468 "num_blocks": 131072, 00:06:53.468 "uuid": "baeb1823-7d21-4aa2-8cb8-35b78b28addb", 00:06:53.468 "assigned_rate_limits": { 00:06:53.468 "rw_ios_per_sec": 0, 00:06:53.468 "rw_mbytes_per_sec": 0, 00:06:53.468 "r_mbytes_per_sec": 
0, 00:06:53.468 "w_mbytes_per_sec": 0 00:06:53.468 }, 00:06:53.468 "claimed": false, 00:06:53.468 "zoned": false, 00:06:53.468 "supported_io_types": { 00:06:53.468 "read": true, 00:06:53.468 "write": true, 00:06:53.468 "unmap": true, 00:06:53.468 "flush": true, 00:06:53.468 "reset": true, 00:06:53.468 "nvme_admin": false, 00:06:53.468 "nvme_io": false, 00:06:53.468 "nvme_io_md": false, 00:06:53.468 "write_zeroes": true, 00:06:53.468 "zcopy": false, 00:06:53.468 "get_zone_info": false, 00:06:53.468 "zone_management": false, 00:06:53.468 "zone_append": false, 00:06:53.468 "compare": false, 00:06:53.468 "compare_and_write": false, 00:06:53.468 "abort": false, 00:06:53.468 "seek_hole": false, 00:06:53.468 "seek_data": false, 00:06:53.468 "copy": false, 00:06:53.468 "nvme_iov_md": false 00:06:53.468 }, 00:06:53.468 "memory_domains": [ 00:06:53.468 { 00:06:53.468 "dma_device_id": "system", 00:06:53.468 "dma_device_type": 1 00:06:53.468 }, 00:06:53.468 { 00:06:53.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.468 "dma_device_type": 2 00:06:53.468 }, 00:06:53.468 { 00:06:53.468 "dma_device_id": "system", 00:06:53.468 "dma_device_type": 1 00:06:53.468 }, 00:06:53.468 { 00:06:53.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.468 "dma_device_type": 2 00:06:53.468 } 00:06:53.468 ], 00:06:53.468 "driver_specific": { 00:06:53.469 "raid": { 00:06:53.469 "uuid": "baeb1823-7d21-4aa2-8cb8-35b78b28addb", 00:06:53.469 "strip_size_kb": 64, 00:06:53.469 "state": "online", 00:06:53.469 "raid_level": "raid0", 00:06:53.469 "superblock": false, 00:06:53.469 "num_base_bdevs": 2, 00:06:53.469 "num_base_bdevs_discovered": 2, 00:06:53.469 "num_base_bdevs_operational": 2, 00:06:53.469 "base_bdevs_list": [ 00:06:53.469 { 00:06:53.469 "name": "BaseBdev1", 00:06:53.469 "uuid": "7913ccc8-a281-4e2d-81f1-c45cf5f8580c", 00:06:53.469 "is_configured": true, 00:06:53.469 "data_offset": 0, 00:06:53.469 "data_size": 65536 00:06:53.469 }, 00:06:53.469 { 00:06:53.469 "name": "BaseBdev2", 
00:06:53.469 "uuid": "b460e24c-62f8-4ce0-b9ae-10ca3d017e97", 00:06:53.469 "is_configured": true, 00:06:53.469 "data_offset": 0, 00:06:53.469 "data_size": 65536 00:06:53.469 } 00:06:53.469 ] 00:06:53.469 } 00:06:53.469 } 00:06:53.469 }' 00:06:53.469 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:53.469 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:53.469 BaseBdev2' 00:06:53.469 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.732 [2024-11-18 10:35:19.504981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:53.732 [2024-11-18 10:35:19.505016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:53.732 [2024-11-18 10:35:19.505064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.732 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.992 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.992 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.992 "name": "Existed_Raid", 00:06:53.992 "uuid": "baeb1823-7d21-4aa2-8cb8-35b78b28addb", 00:06:53.992 "strip_size_kb": 64, 00:06:53.992 
"state": "offline", 00:06:53.992 "raid_level": "raid0", 00:06:53.992 "superblock": false, 00:06:53.992 "num_base_bdevs": 2, 00:06:53.992 "num_base_bdevs_discovered": 1, 00:06:53.992 "num_base_bdevs_operational": 1, 00:06:53.992 "base_bdevs_list": [ 00:06:53.992 { 00:06:53.992 "name": null, 00:06:53.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.992 "is_configured": false, 00:06:53.992 "data_offset": 0, 00:06:53.992 "data_size": 65536 00:06:53.992 }, 00:06:53.992 { 00:06:53.992 "name": "BaseBdev2", 00:06:53.992 "uuid": "b460e24c-62f8-4ce0-b9ae-10ca3d017e97", 00:06:53.992 "is_configured": true, 00:06:53.992 "data_offset": 0, 00:06:53.992 "data_size": 65536 00:06:53.992 } 00:06:53.992 ] 00:06:53.992 }' 00:06:53.992 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.992 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.252 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:54.252 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:54.252 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.252 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.252 10:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:54.252 10:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.252 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.252 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:54.252 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:54.252 10:35:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:54.252 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.252 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.252 [2024-11-18 10:35:20.038152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:54.252 [2024-11-18 10:35:20.038255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60606 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60606 ']' 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60606 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60606 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60606' 00:06:54.535 killing process with pid 60606 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60606 00:06:54.535 [2024-11-18 10:35:20.220620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.535 10:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60606 00:06:54.535 [2024-11-18 10:35:20.237693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.494 10:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:55.494 00:06:55.494 real 0m4.919s 00:06:55.494 user 0m6.966s 00:06:55.494 sys 0m0.818s 00:06:55.494 10:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.494 10:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.494 ************************************ 00:06:55.494 END TEST raid_state_function_test 00:06:55.494 ************************************ 00:06:55.753 10:35:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:55.753 10:35:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:06:55.753 10:35:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.753 10:35:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.753 ************************************ 00:06:55.754 START TEST raid_state_function_test_sb 00:06:55.754 ************************************ 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:55.754 Process raid pid: 60854 00:06:55.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60854 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60854' 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60854 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60854 ']' 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.754 10:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.754 [2024-11-18 10:35:21.535424] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:55.754 [2024-11-18 10:35:21.535596] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.013 [2024-11-18 10:35:21.711346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.013 [2024-11-18 10:35:21.845824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.273 [2024-11-18 10:35:22.080036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.273 [2024-11-18 10:35:22.080145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.533 [2024-11-18 10:35:22.360000] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.533 [2024-11-18 10:35:22.360144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.533 [2024-11-18 10:35:22.360206] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.533 [2024-11-18 10:35:22.360240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.533 
10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.533 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.533 "name": "Existed_Raid", 00:06:56.533 "uuid": "4c5d8c52-8e47-4648-9b60-714be1cc5af1", 00:06:56.534 "strip_size_kb": 
64, 00:06:56.534 "state": "configuring", 00:06:56.534 "raid_level": "raid0", 00:06:56.534 "superblock": true, 00:06:56.534 "num_base_bdevs": 2, 00:06:56.534 "num_base_bdevs_discovered": 0, 00:06:56.534 "num_base_bdevs_operational": 2, 00:06:56.534 "base_bdevs_list": [ 00:06:56.534 { 00:06:56.534 "name": "BaseBdev1", 00:06:56.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.534 "is_configured": false, 00:06:56.534 "data_offset": 0, 00:06:56.534 "data_size": 0 00:06:56.534 }, 00:06:56.534 { 00:06:56.534 "name": "BaseBdev2", 00:06:56.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.534 "is_configured": false, 00:06:56.534 "data_offset": 0, 00:06:56.534 "data_size": 0 00:06:56.534 } 00:06:56.534 ] 00:06:56.534 }' 00:06:56.534 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.534 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 [2024-11-18 10:35:22.755253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:57.101 [2024-11-18 10:35:22.755355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.101 10:35:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 [2024-11-18 10:35:22.767244] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:57.101 [2024-11-18 10:35:22.767326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:57.101 [2024-11-18 10:35:22.767353] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.101 [2024-11-18 10:35:22.767380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 [2024-11-18 10:35:22.818297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.101 BaseBdev1 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:57.101 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.102 [ 00:06:57.102 { 00:06:57.102 "name": "BaseBdev1", 00:06:57.102 "aliases": [ 00:06:57.102 "b857a8d1-7d38-4444-b31d-2fdb6b678a7a" 00:06:57.102 ], 00:06:57.102 "product_name": "Malloc disk", 00:06:57.102 "block_size": 512, 00:06:57.102 "num_blocks": 65536, 00:06:57.102 "uuid": "b857a8d1-7d38-4444-b31d-2fdb6b678a7a", 00:06:57.102 "assigned_rate_limits": { 00:06:57.102 "rw_ios_per_sec": 0, 00:06:57.102 "rw_mbytes_per_sec": 0, 00:06:57.102 "r_mbytes_per_sec": 0, 00:06:57.102 "w_mbytes_per_sec": 0 00:06:57.102 }, 00:06:57.102 "claimed": true, 00:06:57.102 "claim_type": "exclusive_write", 00:06:57.102 "zoned": false, 00:06:57.102 "supported_io_types": { 00:06:57.102 "read": true, 00:06:57.102 "write": true, 00:06:57.102 "unmap": true, 00:06:57.102 "flush": true, 00:06:57.102 "reset": true, 00:06:57.102 "nvme_admin": false, 00:06:57.102 "nvme_io": false, 00:06:57.102 "nvme_io_md": false, 00:06:57.102 "write_zeroes": true, 00:06:57.102 "zcopy": true, 00:06:57.102 "get_zone_info": false, 00:06:57.102 "zone_management": false, 00:06:57.102 "zone_append": false, 00:06:57.102 "compare": false, 00:06:57.102 "compare_and_write": false, 00:06:57.102 
"abort": true, 00:06:57.102 "seek_hole": false, 00:06:57.102 "seek_data": false, 00:06:57.102 "copy": true, 00:06:57.102 "nvme_iov_md": false 00:06:57.102 }, 00:06:57.102 "memory_domains": [ 00:06:57.102 { 00:06:57.102 "dma_device_id": "system", 00:06:57.102 "dma_device_type": 1 00:06:57.102 }, 00:06:57.102 { 00:06:57.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.102 "dma_device_type": 2 00:06:57.102 } 00:06:57.102 ], 00:06:57.102 "driver_specific": {} 00:06:57.102 } 00:06:57.102 ] 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.102 "name": "Existed_Raid", 00:06:57.102 "uuid": "84355969-bcaa-4e3e-9450-97472b5d0c39", 00:06:57.102 "strip_size_kb": 64, 00:06:57.102 "state": "configuring", 00:06:57.102 "raid_level": "raid0", 00:06:57.102 "superblock": true, 00:06:57.102 "num_base_bdevs": 2, 00:06:57.102 "num_base_bdevs_discovered": 1, 00:06:57.102 "num_base_bdevs_operational": 2, 00:06:57.102 "base_bdevs_list": [ 00:06:57.102 { 00:06:57.102 "name": "BaseBdev1", 00:06:57.102 "uuid": "b857a8d1-7d38-4444-b31d-2fdb6b678a7a", 00:06:57.102 "is_configured": true, 00:06:57.102 "data_offset": 2048, 00:06:57.102 "data_size": 63488 00:06:57.102 }, 00:06:57.102 { 00:06:57.102 "name": "BaseBdev2", 00:06:57.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.102 "is_configured": false, 00:06:57.102 "data_offset": 0, 00:06:57.102 "data_size": 0 00:06:57.102 } 00:06:57.102 ] 00:06:57.102 }' 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.102 10:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.669 [2024-11-18 10:35:23.309460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:57.669 [2024-11-18 10:35:23.309556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.669 [2024-11-18 10:35:23.321505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.669 [2024-11-18 10:35:23.323441] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.669 [2024-11-18 10:35:23.323484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.669 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.670 "name": "Existed_Raid", 00:06:57.670 "uuid": "4db6c2d3-5809-4471-86f1-32ee93d21e60", 00:06:57.670 "strip_size_kb": 64, 00:06:57.670 "state": "configuring", 00:06:57.670 "raid_level": "raid0", 00:06:57.670 "superblock": true, 00:06:57.670 "num_base_bdevs": 2, 00:06:57.670 "num_base_bdevs_discovered": 1, 00:06:57.670 "num_base_bdevs_operational": 2, 00:06:57.670 "base_bdevs_list": [ 00:06:57.670 { 00:06:57.670 "name": "BaseBdev1", 00:06:57.670 "uuid": "b857a8d1-7d38-4444-b31d-2fdb6b678a7a", 00:06:57.670 "is_configured": true, 00:06:57.670 "data_offset": 2048, 
00:06:57.670 "data_size": 63488 00:06:57.670 }, 00:06:57.670 { 00:06:57.670 "name": "BaseBdev2", 00:06:57.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.670 "is_configured": false, 00:06:57.670 "data_offset": 0, 00:06:57.670 "data_size": 0 00:06:57.670 } 00:06:57.670 ] 00:06:57.670 }' 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.670 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.929 [2024-11-18 10:35:23.704042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:57.929 [2024-11-18 10:35:23.704330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:57.929 [2024-11-18 10:35:23.704347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:57.929 [2024-11-18 10:35:23.704627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:57.929 [2024-11-18 10:35:23.704786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:57.929 [2024-11-18 10:35:23.704800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:57.929 BaseBdev2 00:06:57.929 [2024-11-18 10:35:23.704948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.929 [ 00:06:57.929 { 00:06:57.929 "name": "BaseBdev2", 00:06:57.929 "aliases": [ 00:06:57.929 "33b975ec-0eb1-42c0-8f84-45fc56fbf8fd" 00:06:57.929 ], 00:06:57.929 "product_name": "Malloc disk", 00:06:57.929 "block_size": 512, 00:06:57.929 "num_blocks": 65536, 00:06:57.929 "uuid": "33b975ec-0eb1-42c0-8f84-45fc56fbf8fd", 00:06:57.929 "assigned_rate_limits": { 00:06:57.929 "rw_ios_per_sec": 0, 00:06:57.929 "rw_mbytes_per_sec": 0, 00:06:57.929 "r_mbytes_per_sec": 0, 00:06:57.929 "w_mbytes_per_sec": 0 00:06:57.929 }, 00:06:57.929 "claimed": true, 00:06:57.929 "claim_type": 
"exclusive_write", 00:06:57.929 "zoned": false, 00:06:57.929 "supported_io_types": { 00:06:57.929 "read": true, 00:06:57.929 "write": true, 00:06:57.929 "unmap": true, 00:06:57.929 "flush": true, 00:06:57.929 "reset": true, 00:06:57.929 "nvme_admin": false, 00:06:57.929 "nvme_io": false, 00:06:57.929 "nvme_io_md": false, 00:06:57.929 "write_zeroes": true, 00:06:57.929 "zcopy": true, 00:06:57.929 "get_zone_info": false, 00:06:57.929 "zone_management": false, 00:06:57.929 "zone_append": false, 00:06:57.929 "compare": false, 00:06:57.929 "compare_and_write": false, 00:06:57.929 "abort": true, 00:06:57.929 "seek_hole": false, 00:06:57.929 "seek_data": false, 00:06:57.929 "copy": true, 00:06:57.929 "nvme_iov_md": false 00:06:57.929 }, 00:06:57.929 "memory_domains": [ 00:06:57.929 { 00:06:57.929 "dma_device_id": "system", 00:06:57.929 "dma_device_type": 1 00:06:57.929 }, 00:06:57.929 { 00:06:57.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.929 "dma_device_type": 2 00:06:57.929 } 00:06:57.929 ], 00:06:57.929 "driver_specific": {} 00:06:57.929 } 00:06:57.929 ] 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:57.929 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.930 "name": "Existed_Raid", 00:06:57.930 "uuid": "4db6c2d3-5809-4471-86f1-32ee93d21e60", 00:06:57.930 "strip_size_kb": 64, 00:06:57.930 "state": "online", 00:06:57.930 "raid_level": "raid0", 00:06:57.930 "superblock": true, 00:06:57.930 "num_base_bdevs": 2, 00:06:57.930 "num_base_bdevs_discovered": 2, 00:06:57.930 "num_base_bdevs_operational": 2, 00:06:57.930 "base_bdevs_list": [ 00:06:57.930 { 00:06:57.930 "name": "BaseBdev1", 00:06:57.930 "uuid": "b857a8d1-7d38-4444-b31d-2fdb6b678a7a", 00:06:57.930 "is_configured": true, 00:06:57.930 "data_offset": 2048, 00:06:57.930 "data_size": 63488 
00:06:57.930 }, 00:06:57.930 { 00:06:57.930 "name": "BaseBdev2", 00:06:57.930 "uuid": "33b975ec-0eb1-42c0-8f84-45fc56fbf8fd", 00:06:57.930 "is_configured": true, 00:06:57.930 "data_offset": 2048, 00:06:57.930 "data_size": 63488 00:06:57.930 } 00:06:57.930 ] 00:06:57.930 }' 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.930 10:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 [2024-11-18 10:35:24.143608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.499 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:58.499 "name": 
"Existed_Raid", 00:06:58.499 "aliases": [ 00:06:58.499 "4db6c2d3-5809-4471-86f1-32ee93d21e60" 00:06:58.499 ], 00:06:58.499 "product_name": "Raid Volume", 00:06:58.499 "block_size": 512, 00:06:58.499 "num_blocks": 126976, 00:06:58.499 "uuid": "4db6c2d3-5809-4471-86f1-32ee93d21e60", 00:06:58.499 "assigned_rate_limits": { 00:06:58.499 "rw_ios_per_sec": 0, 00:06:58.499 "rw_mbytes_per_sec": 0, 00:06:58.499 "r_mbytes_per_sec": 0, 00:06:58.500 "w_mbytes_per_sec": 0 00:06:58.500 }, 00:06:58.500 "claimed": false, 00:06:58.500 "zoned": false, 00:06:58.500 "supported_io_types": { 00:06:58.500 "read": true, 00:06:58.500 "write": true, 00:06:58.500 "unmap": true, 00:06:58.500 "flush": true, 00:06:58.500 "reset": true, 00:06:58.500 "nvme_admin": false, 00:06:58.500 "nvme_io": false, 00:06:58.500 "nvme_io_md": false, 00:06:58.500 "write_zeroes": true, 00:06:58.500 "zcopy": false, 00:06:58.500 "get_zone_info": false, 00:06:58.500 "zone_management": false, 00:06:58.500 "zone_append": false, 00:06:58.500 "compare": false, 00:06:58.500 "compare_and_write": false, 00:06:58.500 "abort": false, 00:06:58.500 "seek_hole": false, 00:06:58.500 "seek_data": false, 00:06:58.500 "copy": false, 00:06:58.500 "nvme_iov_md": false 00:06:58.500 }, 00:06:58.500 "memory_domains": [ 00:06:58.500 { 00:06:58.500 "dma_device_id": "system", 00:06:58.500 "dma_device_type": 1 00:06:58.500 }, 00:06:58.500 { 00:06:58.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.500 "dma_device_type": 2 00:06:58.500 }, 00:06:58.500 { 00:06:58.500 "dma_device_id": "system", 00:06:58.500 "dma_device_type": 1 00:06:58.500 }, 00:06:58.500 { 00:06:58.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.500 "dma_device_type": 2 00:06:58.500 } 00:06:58.500 ], 00:06:58.500 "driver_specific": { 00:06:58.500 "raid": { 00:06:58.500 "uuid": "4db6c2d3-5809-4471-86f1-32ee93d21e60", 00:06:58.500 "strip_size_kb": 64, 00:06:58.500 "state": "online", 00:06:58.500 "raid_level": "raid0", 00:06:58.500 "superblock": true, 00:06:58.500 
"num_base_bdevs": 2, 00:06:58.500 "num_base_bdevs_discovered": 2, 00:06:58.500 "num_base_bdevs_operational": 2, 00:06:58.500 "base_bdevs_list": [ 00:06:58.500 { 00:06:58.500 "name": "BaseBdev1", 00:06:58.500 "uuid": "b857a8d1-7d38-4444-b31d-2fdb6b678a7a", 00:06:58.500 "is_configured": true, 00:06:58.500 "data_offset": 2048, 00:06:58.500 "data_size": 63488 00:06:58.500 }, 00:06:58.500 { 00:06:58.500 "name": "BaseBdev2", 00:06:58.500 "uuid": "33b975ec-0eb1-42c0-8f84-45fc56fbf8fd", 00:06:58.500 "is_configured": true, 00:06:58.500 "data_offset": 2048, 00:06:58.500 "data_size": 63488 00:06:58.500 } 00:06:58.500 ] 00:06:58.500 } 00:06:58.500 } 00:06:58.500 }' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:58.500 BaseBdev2' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.500 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.500 [2024-11-18 10:35:24.343030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:58.500 [2024-11-18 10:35:24.343062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.500 [2024-11-18 10:35:24.343107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.760 10:35:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.760 "name": "Existed_Raid", 00:06:58.760 "uuid": "4db6c2d3-5809-4471-86f1-32ee93d21e60", 00:06:58.760 "strip_size_kb": 64, 00:06:58.760 "state": "offline", 00:06:58.760 "raid_level": "raid0", 00:06:58.760 "superblock": true, 00:06:58.760 "num_base_bdevs": 2, 00:06:58.760 "num_base_bdevs_discovered": 1, 00:06:58.760 "num_base_bdevs_operational": 1, 00:06:58.760 "base_bdevs_list": [ 00:06:58.760 { 00:06:58.760 "name": null, 00:06:58.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.760 "is_configured": false, 00:06:58.760 "data_offset": 0, 00:06:58.760 "data_size": 63488 00:06:58.760 }, 00:06:58.760 { 00:06:58.760 "name": "BaseBdev2", 00:06:58.760 "uuid": "33b975ec-0eb1-42c0-8f84-45fc56fbf8fd", 00:06:58.760 "is_configured": true, 00:06:58.760 "data_offset": 2048, 00:06:58.760 "data_size": 63488 00:06:58.760 } 00:06:58.760 ] 00:06:58.760 }' 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.760 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.019 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:59.019 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:59.019 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:59.019 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.019 10:35:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.019 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.019 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.279 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:59.279 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:59.279 10:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:59.279 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.279 10:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.279 [2024-11-18 10:35:24.919036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:59.279 [2024-11-18 10:35:24.919100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.279 10:35:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60854 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60854 ']' 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60854 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60854 00:06:59.279 killing process with pid 60854 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60854' 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60854 00:06:59.279 [2024-11-18 10:35:25.103654] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.279 10:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60854 00:06:59.279 [2024-11-18 10:35:25.120256] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.657 ************************************ 
00:07:00.657 END TEST raid_state_function_test_sb 00:07:00.657 ************************************ 00:07:00.657 10:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:00.657 00:07:00.657 real 0m4.810s 00:07:00.657 user 0m6.724s 00:07:00.657 sys 0m0.863s 00:07:00.657 10:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.657 10:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.657 10:35:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:00.657 10:35:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:00.657 10:35:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.657 10:35:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.658 ************************************ 00:07:00.658 START TEST raid_superblock_test 00:07:00.658 ************************************ 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:00.658 
10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61106 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61106 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:00.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61106 ']' 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.658 10:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.658 [2024-11-18 10:35:26.410951] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:00.658 [2024-11-18 10:35:26.411077] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61106 ] 00:07:00.917 [2024-11-18 10:35:26.584548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.917 [2024-11-18 10:35:26.714364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.176 [2024-11-18 10:35:26.939504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.176 [2024-11-18 10:35:26.939557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:01.435 10:35:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.435 malloc1 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.435 [2024-11-18 10:35:27.283144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:01.435 [2024-11-18 10:35:27.283296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.435 [2024-11-18 10:35:27.283339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:01.435 [2024-11-18 10:35:27.283369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:01.435 [2024-11-18 10:35:27.285652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.435 [2024-11-18 10:35:27.285723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:01.435 pt1 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:01.435 10:35:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.435 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.694 malloc2
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.694 [2024-11-18 10:35:27.348011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:01.694 [2024-11-18 10:35:27.348069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:01.694 [2024-11-18 10:35:27.348094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:01.694 [2024-11-18 10:35:27.348114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:01.694 [2024-11-18 10:35:27.350410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:01.694 [2024-11-18 10:35:27.350446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:01.694 pt2
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.694 [2024-11-18 10:35:27.360049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:01.694 [2024-11-18 10:35:27.362021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:01.694 [2024-11-18 10:35:27.362226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:01.694 [2024-11-18 10:35:27.362243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:01.694 [2024-11-18 10:35:27.362476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:01.694 [2024-11-18 10:35:27.362632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:01.694 [2024-11-18 10:35:27.362643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:07:01.694 [2024-11-18 10:35:27.362786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.694 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:01.695 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.695 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:01.695 "name": "raid_bdev1",
00:07:01.695 "uuid": "e41c313c-c298-4cef-add0-d5e232868485",
00:07:01.695 "strip_size_kb": 64,
00:07:01.695 "state": "online",
00:07:01.695 "raid_level": "raid0",
00:07:01.695 "superblock": true,
00:07:01.695 "num_base_bdevs": 2,
00:07:01.695 "num_base_bdevs_discovered": 2,
00:07:01.695 "num_base_bdevs_operational": 2,
00:07:01.695 "base_bdevs_list": [
00:07:01.695 {
00:07:01.695 "name": "pt1",
00:07:01.695 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:01.695 "is_configured": true,
00:07:01.695 "data_offset": 2048,
00:07:01.695 "data_size": 63488
00:07:01.695 },
00:07:01.695 {
00:07:01.695 "name": "pt2",
00:07:01.695 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:01.695 "is_configured": true,
00:07:01.695 "data_offset": 2048,
00:07:01.695 "data_size": 63488
00:07:01.695 }
00:07:01.695 ]
00:07:01.695 }'
00:07:01.695 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:01.695 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:01.954 [2024-11-18 10:35:27.783532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:01.954 "name": "raid_bdev1",
00:07:01.954 "aliases": [
00:07:01.954 "e41c313c-c298-4cef-add0-d5e232868485"
00:07:01.954 ],
00:07:01.954 "product_name": "Raid Volume",
00:07:01.954 "block_size": 512,
00:07:01.954 "num_blocks": 126976,
00:07:01.954 "uuid": "e41c313c-c298-4cef-add0-d5e232868485",
00:07:01.954 "assigned_rate_limits": {
00:07:01.954 "rw_ios_per_sec": 0,
00:07:01.954 "rw_mbytes_per_sec": 0,
00:07:01.954 "r_mbytes_per_sec": 0,
00:07:01.954 "w_mbytes_per_sec": 0
00:07:01.954 },
00:07:01.954 "claimed": false,
00:07:01.954 "zoned": false,
00:07:01.954 "supported_io_types": {
00:07:01.954 "read": true,
00:07:01.954 "write": true,
00:07:01.954 "unmap": true,
00:07:01.954 "flush": true,
00:07:01.954 "reset": true,
00:07:01.954 "nvme_admin": false,
00:07:01.954 "nvme_io": false,
00:07:01.954 "nvme_io_md": false,
00:07:01.954 "write_zeroes": true,
00:07:01.954 "zcopy": false,
00:07:01.954 "get_zone_info": false,
00:07:01.954 "zone_management": false,
00:07:01.954 "zone_append": false,
00:07:01.954 "compare": false,
00:07:01.954 "compare_and_write": false,
00:07:01.954 "abort": false,
00:07:01.954 "seek_hole": false,
00:07:01.954 "seek_data": false,
00:07:01.954 "copy": false,
00:07:01.954 "nvme_iov_md": false
00:07:01.954 },
00:07:01.954 "memory_domains": [
00:07:01.954 {
00:07:01.954 "dma_device_id": "system",
00:07:01.954 "dma_device_type": 1
00:07:01.954 },
00:07:01.954 {
00:07:01.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.954 "dma_device_type": 2
00:07:01.954 },
00:07:01.954 {
00:07:01.954 "dma_device_id": "system",
00:07:01.954 "dma_device_type": 1
00:07:01.954 },
00:07:01.954 {
00:07:01.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.954 "dma_device_type": 2
00:07:01.954 }
00:07:01.954 ],
00:07:01.954 "driver_specific": {
00:07:01.954 "raid": {
00:07:01.954 "uuid": "e41c313c-c298-4cef-add0-d5e232868485",
00:07:01.954 "strip_size_kb": 64,
00:07:01.954 "state": "online",
00:07:01.954 "raid_level": "raid0",
00:07:01.954 "superblock": true,
00:07:01.954 "num_base_bdevs": 2,
00:07:01.954 "num_base_bdevs_discovered": 2,
00:07:01.954 "num_base_bdevs_operational": 2,
00:07:01.954 "base_bdevs_list": [
00:07:01.954 {
00:07:01.954 "name": "pt1",
00:07:01.954 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:01.954 "is_configured": true,
00:07:01.954 "data_offset": 2048,
00:07:01.954 "data_size": 63488
00:07:01.954 },
00:07:01.954 {
00:07:01.954 "name": "pt2",
00:07:01.954 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:01.954 "is_configured": true,
00:07:01.954 "data_offset": 2048,
00:07:01.954 "data_size": 63488
00:07:01.954 }
00:07:01.954 ]
00:07:01.954 }
00:07:01.954 }
00:07:01.954 }'
00:07:01.954 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:02.213 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:02.213 pt2'
00:07:02.213 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:02.213 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:02.213 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:02.213 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.214 10:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.214 [2024-11-18 10:35:27.991184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e41c313c-c298-4cef-add0-d5e232868485
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e41c313c-c298-4cef-add0-d5e232868485 ']'
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.214 [2024-11-18 10:35:28.038929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:02.214 [2024-11-18 10:35:28.038994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:02.214 [2024-11-18 10:35:28.039086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:02.214 [2024-11-18 10:35:28.039149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:02.214 [2024-11-18 10:35:28.039265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.214 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.474 [2024-11-18 10:35:28.174994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:02.474 [2024-11-18 10:35:28.177000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:02.474 [2024-11-18 10:35:28.177063] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:02.474 [2024-11-18 10:35:28.177107] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:02.474 [2024-11-18 10:35:28.177121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:02.474 [2024-11-18 10:35:28.177132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:07:02.474 request:
00:07:02.474 {
00:07:02.474 "name": "raid_bdev1",
00:07:02.474 "raid_level": "raid0",
00:07:02.474 "base_bdevs": [
00:07:02.474 "malloc1",
00:07:02.474 "malloc2"
00:07:02.474 ],
00:07:02.474 "strip_size_kb": 64,
00:07:02.474 "superblock": false,
00:07:02.474 "method": "bdev_raid_create",
00:07:02.474 "req_id": 1
00:07:02.474 }
00:07:02.474 Got JSON-RPC error response
00:07:02.474 response:
00:07:02.474 {
00:07:02.474 "code": -17,
00:07:02.474 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:02.474 }
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.474 [2024-11-18 10:35:28.242988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:02.474 [2024-11-18 10:35:28.243083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:02.474 [2024-11-18 10:35:28.243120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:07:02.474 [2024-11-18 10:35:28.243152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:02.474 [2024-11-18 10:35:28.245632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:02.474 [2024-11-18 10:35:28.245703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:02.474 [2024-11-18 10:35:28.245793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:02.474 [2024-11-18 10:35:28.245868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:02.474 pt1
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.474 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:02.474 "name": "raid_bdev1",
00:07:02.474 "uuid": "e41c313c-c298-4cef-add0-d5e232868485",
00:07:02.474 "strip_size_kb": 64,
00:07:02.474 "state": "configuring",
00:07:02.474 "raid_level": "raid0",
00:07:02.474 "superblock": true,
00:07:02.474 "num_base_bdevs": 2,
00:07:02.474 "num_base_bdevs_discovered": 1,
00:07:02.474 "num_base_bdevs_operational": 2,
00:07:02.474 "base_bdevs_list": [
00:07:02.474 {
00:07:02.474 "name": "pt1",
00:07:02.474 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:02.474 "is_configured": true,
00:07:02.474 "data_offset": 2048,
00:07:02.474 "data_size": 63488
00:07:02.474 },
00:07:02.474 {
00:07:02.474 "name": null,
00:07:02.475 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:02.475 "is_configured": false,
00:07:02.475 "data_offset": 2048,
00:07:02.475 "data_size": 63488
00:07:02.475 }
00:07:02.475 ]
00:07:02.475 }'
00:07:02.475 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:02.475 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.042 [2024-11-18 10:35:28.662541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:03.042 [2024-11-18 10:35:28.662595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:03.042 [2024-11-18 10:35:28.662614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:07:03.042 [2024-11-18 10:35:28.662624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:03.042 [2024-11-18 10:35:28.663039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:03.042 [2024-11-18 10:35:28.663059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:03.042 [2024-11-18 10:35:28.663124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:03.042 [2024-11-18 10:35:28.663145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:03.042 [2024-11-18 10:35:28.663271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:03.042 [2024-11-18 10:35:28.663285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:03.042 [2024-11-18 10:35:28.663517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:07:03.042 [2024-11-18 10:35:28.663670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:03.042 [2024-11-18 10:35:28.663679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:03.042 [2024-11-18 10:35:28.663813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:03.042 pt2
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:03.042 "name": "raid_bdev1",
00:07:03.042 "uuid": "e41c313c-c298-4cef-add0-d5e232868485",
00:07:03.042 "strip_size_kb": 64,
00:07:03.042 "state": "online",
00:07:03.042 "raid_level": "raid0",
00:07:03.042 "superblock": true,
00:07:03.042 "num_base_bdevs": 2,
00:07:03.042 "num_base_bdevs_discovered": 2,
00:07:03.042 "num_base_bdevs_operational": 2,
00:07:03.042 "base_bdevs_list": [
00:07:03.042 {
00:07:03.042 "name": "pt1",
00:07:03.042 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:03.042 "is_configured": true,
00:07:03.042 "data_offset": 2048,
00:07:03.042 "data_size": 63488
00:07:03.042 },
00:07:03.042 {
00:07:03.042 "name": "pt2",
00:07:03.042 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:03.042 "is_configured": true,
00:07:03.042 "data_offset": 2048,
00:07:03.042 "data_size": 63488
00:07:03.042 }
00:07:03.042 ]
00:07:03.042 }'
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:03.042 10:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:03.301 [2024-11-18 10:35:29.062074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.301 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:03.301 "name": "raid_bdev1",
00:07:03.301 "aliases": [
00:07:03.301 "e41c313c-c298-4cef-add0-d5e232868485"
00:07:03.301 ],
00:07:03.301 "product_name": "Raid Volume",
00:07:03.301 "block_size": 512,
00:07:03.301 "num_blocks": 126976,
00:07:03.301 "uuid": "e41c313c-c298-4cef-add0-d5e232868485",
00:07:03.301 "assigned_rate_limits": {
00:07:03.301 "rw_ios_per_sec": 0,
00:07:03.301 "rw_mbytes_per_sec": 0,
00:07:03.301 "r_mbytes_per_sec": 0,
00:07:03.301 "w_mbytes_per_sec": 0
00:07:03.301 },
00:07:03.301 "claimed": false,
00:07:03.301 "zoned": false,
00:07:03.301 "supported_io_types": {
00:07:03.301 "read": true,
00:07:03.301 "write": true,
00:07:03.301 "unmap": true,
00:07:03.301 "flush": true,
00:07:03.301 "reset": true,
00:07:03.301 "nvme_admin": false,
00:07:03.301 "nvme_io": false,
00:07:03.301 "nvme_io_md": false,
00:07:03.301 "write_zeroes": true,
00:07:03.301 "zcopy": false,
00:07:03.301 "get_zone_info": false,
00:07:03.301 "zone_management": false,
00:07:03.301 "zone_append": false,
00:07:03.301 "compare": false,
00:07:03.301 "compare_and_write": false,
00:07:03.301 "abort": false,
00:07:03.301 "seek_hole": false,
00:07:03.301 "seek_data": false,
00:07:03.301 "copy": false,
00:07:03.301 "nvme_iov_md": false
00:07:03.301 },
00:07:03.301 "memory_domains": [
00:07:03.301 {
00:07:03.301 "dma_device_id": "system",
00:07:03.301 "dma_device_type": 1
00:07:03.301 },
00:07:03.301 {
00:07:03.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:03.301 "dma_device_type": 2
00:07:03.301 },
00:07:03.301 {
00:07:03.301 "dma_device_id": "system",
00:07:03.301 "dma_device_type": 1
00:07:03.301 },
00:07:03.301 {
00:07:03.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:03.301 "dma_device_type": 2
00:07:03.301 }
00:07:03.301 ],
00:07:03.301 "driver_specific": {
00:07:03.301 "raid": {
00:07:03.301 "uuid": "e41c313c-c298-4cef-add0-d5e232868485",
00:07:03.301 "strip_size_kb": 64,
00:07:03.301 "state": "online",
00:07:03.301 "raid_level": "raid0",
00:07:03.301 "superblock": true,
00:07:03.301 "num_base_bdevs": 2,
00:07:03.301 "num_base_bdevs_discovered": 2,
00:07:03.301 "num_base_bdevs_operational": 2,
00:07:03.301 "base_bdevs_list": [
00:07:03.301 {
00:07:03.301 "name": "pt1",
00:07:03.301 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:03.302 "is_configured": true,
00:07:03.302 "data_offset": 2048,
00:07:03.302 "data_size": 63488
00:07:03.302 },
00:07:03.302 {
00:07:03.302 "name": "pt2",
00:07:03.302 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:03.302 "is_configured": true,
00:07:03.302 "data_offset": 2048,
00:07:03.302 "data_size": 63488
00:07:03.302 }
00:07:03.302 ]
00:07:03.302 }
00:07:03.302 }
00:07:03.302 }'
00:07:03.302 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:03.302 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:03.302 pt2'
00:07:03.302 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:03.302 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:03.302 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:03.302 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:03.302 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.561 [2024-11-18 10:35:29.289662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e41c313c-c298-4cef-add0-d5e232868485 '!=' e41c313c-c298-4cef-add0-d5e232868485 ']'
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:07:03.561 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61106
00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61106 ']'
00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61106
00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61106
00:07:03.562 10:35:29 bdev_raid.raid_superblock_test
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61106' 00:07:03.562 killing process with pid 61106 00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61106 00:07:03.562 [2024-11-18 10:35:29.355384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.562 [2024-11-18 10:35:29.355479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.562 [2024-11-18 10:35:29.355529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.562 [2024-11-18 10:35:29.355542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:03.562 10:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61106 00:07:03.821 [2024-11-18 10:35:29.575098] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.203 10:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:05.203 00:07:05.203 real 0m4.377s 00:07:05.203 user 0m6.004s 00:07:05.203 sys 0m0.742s 00:07:05.203 ************************************ 00:07:05.203 END TEST raid_superblock_test 00:07:05.203 ************************************ 00:07:05.203 10:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.203 10:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.203 10:35:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:05.203 10:35:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:05.203 10:35:30 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:05.203 10:35:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.203 ************************************ 00:07:05.203 START TEST raid_read_error_test 00:07:05.203 ************************************ 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1MF6QraTbk 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61312 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61312 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61312 ']' 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.203 10:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.203 [2024-11-18 10:35:30.878098] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:05.203 [2024-11-18 10:35:30.878296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61312 ] 00:07:05.203 [2024-11-18 10:35:31.072011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.472 [2024-11-18 10:35:31.206532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.742 [2024-11-18 10:35:31.439297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.742 [2024-11-18 10:35:31.439368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 BaseBdev1_malloc 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 true 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 [2024-11-18 10:35:31.752202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:06.002 [2024-11-18 10:35:31.752362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.002 [2024-11-18 10:35:31.752400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:06.002 [2024-11-18 10:35:31.752431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.002 [2024-11-18 10:35:31.754733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.002 [2024-11-18 10:35:31.754811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:06.002 BaseBdev1 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:06.002 BaseBdev2_malloc 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 true 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.002 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 [2024-11-18 10:35:31.820271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:06.002 [2024-11-18 10:35:31.820329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.003 [2024-11-18 10:35:31.820345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:06.003 [2024-11-18 10:35:31.820356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.003 [2024-11-18 10:35:31.822609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.003 [2024-11-18 10:35:31.822645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:06.003 BaseBdev2 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:06.003 10:35:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.003 [2024-11-18 10:35:31.832319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.003 [2024-11-18 10:35:31.834303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:06.003 [2024-11-18 10:35:31.834497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:06.003 [2024-11-18 10:35:31.834520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:06.003 [2024-11-18 10:35:31.834735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:06.003 [2024-11-18 10:35:31.834937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:06.003 [2024-11-18 10:35:31.834968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:06.003 [2024-11-18 10:35:31.835112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.003 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.263 10:35:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.263 "name": "raid_bdev1", 00:07:06.263 "uuid": "a3d383f6-11f1-4349-805c-9bab4953e58d", 00:07:06.263 "strip_size_kb": 64, 00:07:06.263 "state": "online", 00:07:06.263 "raid_level": "raid0", 00:07:06.263 "superblock": true, 00:07:06.263 "num_base_bdevs": 2, 00:07:06.263 "num_base_bdevs_discovered": 2, 00:07:06.263 "num_base_bdevs_operational": 2, 00:07:06.263 "base_bdevs_list": [ 00:07:06.263 { 00:07:06.263 "name": "BaseBdev1", 00:07:06.263 "uuid": "b8b191e1-580f-5ca2-bb38-fad3024f6dbb", 00:07:06.263 "is_configured": true, 00:07:06.263 "data_offset": 2048, 00:07:06.263 "data_size": 63488 00:07:06.263 }, 00:07:06.263 { 00:07:06.263 "name": "BaseBdev2", 00:07:06.263 "uuid": "2822c25d-1037-59ea-a17c-52d7c0cf1dc0", 00:07:06.263 "is_configured": true, 00:07:06.263 "data_offset": 2048, 00:07:06.263 "data_size": 63488 00:07:06.263 } 00:07:06.263 ] 00:07:06.263 }' 00:07:06.263 10:35:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.263 10:35:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.522 10:35:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:06.522 10:35:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:06.522 [2024-11-18 10:35:32.352802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.460 "name": "raid_bdev1", 00:07:07.460 "uuid": "a3d383f6-11f1-4349-805c-9bab4953e58d", 00:07:07.460 "strip_size_kb": 64, 00:07:07.460 "state": "online", 00:07:07.460 "raid_level": "raid0", 00:07:07.460 "superblock": true, 00:07:07.460 "num_base_bdevs": 2, 00:07:07.460 "num_base_bdevs_discovered": 2, 00:07:07.460 "num_base_bdevs_operational": 2, 00:07:07.460 "base_bdevs_list": [ 00:07:07.460 { 00:07:07.460 "name": "BaseBdev1", 00:07:07.460 "uuid": "b8b191e1-580f-5ca2-bb38-fad3024f6dbb", 00:07:07.460 "is_configured": true, 00:07:07.460 "data_offset": 2048, 00:07:07.460 "data_size": 63488 00:07:07.460 }, 00:07:07.460 { 00:07:07.460 "name": "BaseBdev2", 00:07:07.460 "uuid": "2822c25d-1037-59ea-a17c-52d7c0cf1dc0", 00:07:07.460 "is_configured": true, 00:07:07.460 "data_offset": 2048, 00:07:07.460 "data_size": 63488 00:07:07.460 } 00:07:07.460 ] 00:07:07.460 }' 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.460 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.028 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.028 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.028 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.028 [2024-11-18 10:35:33.704791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.028 [2024-11-18 10:35:33.704843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.028 [2024-11-18 10:35:33.707293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.028 [2024-11-18 10:35:33.707346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.028 [2024-11-18 10:35:33.707380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.028 [2024-11-18 10:35:33.707392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:08.028 { 00:07:08.028 "results": [ 00:07:08.028 { 00:07:08.028 "job": "raid_bdev1", 00:07:08.028 "core_mask": "0x1", 00:07:08.028 "workload": "randrw", 00:07:08.028 "percentage": 50, 00:07:08.029 "status": "finished", 00:07:08.029 "queue_depth": 1, 00:07:08.029 "io_size": 131072, 00:07:08.029 "runtime": 1.352654, 00:07:08.029 "iops": 15406.74851070562, 00:07:08.029 "mibps": 1925.8435638382025, 00:07:08.029 "io_failed": 1, 00:07:08.029 "io_timeout": 0, 00:07:08.029 "avg_latency_us": 91.31270595477632, 00:07:08.029 "min_latency_us": 24.370305676855896, 00:07:08.029 "max_latency_us": 1359.3711790393013 00:07:08.029 } 00:07:08.029 ], 00:07:08.029 "core_count": 1 00:07:08.029 } 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61312 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61312 ']' 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61312 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61312 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.029 killing process with pid 61312 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61312' 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61312 00:07:08.029 [2024-11-18 10:35:33.740321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.029 10:35:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61312 00:07:08.029 [2024-11-18 10:35:33.879181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1MF6QraTbk 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:09.411 00:07:09.411 real 0m4.331s 00:07:09.411 user 0m5.059s 00:07:09.411 sys 0m0.610s 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.411 10:35:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.411 ************************************ 00:07:09.411 END TEST raid_read_error_test 00:07:09.411 ************************************ 00:07:09.411 10:35:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:09.411 10:35:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:09.411 10:35:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.411 10:35:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.411 ************************************ 00:07:09.411 START TEST raid_write_error_test 00:07:09.411 ************************************ 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.411 10:35:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gnX2Nnxg4r 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61452 00:07:09.411 10:35:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61452 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61452 ']' 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.411 10:35:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.411 [2024-11-18 10:35:35.269747] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:09.411 [2024-11-18 10:35:35.269851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61452 ] 00:07:09.672 [2024-11-18 10:35:35.442720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.932 [2024-11-18 10:35:35.575861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.932 [2024-11-18 10:35:35.806972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.932 [2024-11-18 10:35:35.807056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.500 BaseBdev1_malloc 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.500 true 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.500 [2024-11-18 10:35:36.152602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:10.500 [2024-11-18 10:35:36.152665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.500 [2024-11-18 10:35:36.152685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:10.500 [2024-11-18 10:35:36.152699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.500 [2024-11-18 10:35:36.155015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.500 [2024-11-18 10:35:36.155050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:10.500 BaseBdev1 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.500 BaseBdev2_malloc 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.500 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:10.501 10:35:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.501 true 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.501 [2024-11-18 10:35:36.224293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:10.501 [2024-11-18 10:35:36.224341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.501 [2024-11-18 10:35:36.224358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:10.501 [2024-11-18 10:35:36.224369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.501 [2024-11-18 10:35:36.226586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.501 [2024-11-18 10:35:36.226619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:10.501 BaseBdev2 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.501 [2024-11-18 10:35:36.236339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:10.501 [2024-11-18 10:35:36.238314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.501 [2024-11-18 10:35:36.238499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.501 [2024-11-18 10:35:36.238520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.501 [2024-11-18 10:35:36.238735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:10.501 [2024-11-18 10:35:36.238927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.501 [2024-11-18 10:35:36.238951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:10.501 [2024-11-18 10:35:36.239112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.501 "name": "raid_bdev1", 00:07:10.501 "uuid": "f0379e0c-022d-41d3-a9d6-5f938d78aa69", 00:07:10.501 "strip_size_kb": 64, 00:07:10.501 "state": "online", 00:07:10.501 "raid_level": "raid0", 00:07:10.501 "superblock": true, 00:07:10.501 "num_base_bdevs": 2, 00:07:10.501 "num_base_bdevs_discovered": 2, 00:07:10.501 "num_base_bdevs_operational": 2, 00:07:10.501 "base_bdevs_list": [ 00:07:10.501 { 00:07:10.501 "name": "BaseBdev1", 00:07:10.501 "uuid": "cb61ea7b-8148-5da6-b0bf-fd09a51a9321", 00:07:10.501 "is_configured": true, 00:07:10.501 "data_offset": 2048, 00:07:10.501 "data_size": 63488 00:07:10.501 }, 00:07:10.501 { 00:07:10.501 "name": "BaseBdev2", 00:07:10.501 "uuid": "efa73246-7ace-5cb1-bfd7-8a95d94666da", 00:07:10.501 "is_configured": true, 00:07:10.501 "data_offset": 2048, 00:07:10.501 "data_size": 63488 00:07:10.501 } 00:07:10.501 ] 00:07:10.501 }' 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.501 10:35:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.761 10:35:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:10.761 10:35:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:11.021 [2024-11-18 10:35:36.725059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.960 10:35:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.960 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.960 "name": "raid_bdev1", 00:07:11.960 "uuid": "f0379e0c-022d-41d3-a9d6-5f938d78aa69", 00:07:11.960 "strip_size_kb": 64, 00:07:11.960 "state": "online", 00:07:11.960 "raid_level": "raid0", 00:07:11.960 "superblock": true, 00:07:11.960 "num_base_bdevs": 2, 00:07:11.960 "num_base_bdevs_discovered": 2, 00:07:11.960 "num_base_bdevs_operational": 2, 00:07:11.960 "base_bdevs_list": [ 00:07:11.960 { 00:07:11.960 "name": "BaseBdev1", 00:07:11.960 "uuid": "cb61ea7b-8148-5da6-b0bf-fd09a51a9321", 00:07:11.960 "is_configured": true, 00:07:11.960 "data_offset": 2048, 00:07:11.960 "data_size": 63488 00:07:11.960 }, 00:07:11.960 { 00:07:11.960 "name": "BaseBdev2", 00:07:11.961 "uuid": "efa73246-7ace-5cb1-bfd7-8a95d94666da", 00:07:11.961 "is_configured": true, 00:07:11.961 "data_offset": 2048, 00:07:11.961 "data_size": 63488 00:07:11.961 } 00:07:11.961 ] 00:07:11.961 }' 00:07:11.961 10:35:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.961 10:35:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.221 [2024-11-18 10:35:38.076952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.221 [2024-11-18 10:35:38.077012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.221 [2024-11-18 10:35:38.079632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.221 [2024-11-18 10:35:38.079686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.221 [2024-11-18 10:35:38.079721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.221 [2024-11-18 10:35:38.079734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:12.221 { 00:07:12.221 "results": [ 00:07:12.221 { 00:07:12.221 "job": "raid_bdev1", 00:07:12.221 "core_mask": "0x1", 00:07:12.221 "workload": "randrw", 00:07:12.221 "percentage": 50, 00:07:12.221 "status": "finished", 00:07:12.221 "queue_depth": 1, 00:07:12.221 "io_size": 131072, 00:07:12.221 "runtime": 1.352593, 00:07:12.221 "iops": 14854.431451293922, 00:07:12.221 "mibps": 1856.8039314117402, 00:07:12.221 "io_failed": 1, 00:07:12.221 "io_timeout": 0, 00:07:12.221 "avg_latency_us": 94.62081739127034, 00:07:12.221 "min_latency_us": 24.929257641921396, 00:07:12.221 "max_latency_us": 1387.989519650655 00:07:12.221 } 00:07:12.221 ], 00:07:12.221 "core_count": 1 00:07:12.221 } 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61452 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61452 ']' 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61452 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:12.221 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.222 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61452 00:07:12.482 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.482 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.482 killing process with pid 61452 00:07:12.482 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61452' 00:07:12.482 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61452 00:07:12.482 [2024-11-18 10:35:38.127852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.482 10:35:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61452 00:07:12.482 [2024-11-18 10:35:38.275656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gnX2Nnxg4r 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:13.864 00:07:13.864 real 0m4.342s 00:07:13.864 user 0m5.033s 00:07:13.864 sys 0m0.629s 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.864 10:35:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.864 ************************************ 00:07:13.864 END TEST raid_write_error_test 00:07:13.864 ************************************ 00:07:13.864 10:35:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:13.864 10:35:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:13.864 10:35:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.864 10:35:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.864 10:35:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.864 ************************************ 00:07:13.864 START TEST raid_state_function_test 00:07:13.864 ************************************ 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61596 00:07:13.864 10:35:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61596' 00:07:13.864 Process raid pid: 61596 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61596 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61596 ']' 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.864 10:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.864 [2024-11-18 10:35:39.682801] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:13.864 [2024-11-18 10:35:39.682920] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.124 [2024-11-18 10:35:39.863040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.124 [2024-11-18 10:35:40.001411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.384 [2024-11-18 10:35:40.238895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.384 [2024-11-18 10:35:40.238949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.644 [2024-11-18 10:35:40.507617] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.644 [2024-11-18 10:35:40.507676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.644 [2024-11-18 10:35:40.507687] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.644 [2024-11-18 10:35:40.507697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.644 10:35:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.644 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.903 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.903 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.903 "name": "Existed_Raid", 00:07:14.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.903 "strip_size_kb": 64, 00:07:14.903 "state": "configuring", 00:07:14.903 
"raid_level": "concat", 00:07:14.903 "superblock": false, 00:07:14.903 "num_base_bdevs": 2, 00:07:14.903 "num_base_bdevs_discovered": 0, 00:07:14.903 "num_base_bdevs_operational": 2, 00:07:14.903 "base_bdevs_list": [ 00:07:14.903 { 00:07:14.903 "name": "BaseBdev1", 00:07:14.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.903 "is_configured": false, 00:07:14.903 "data_offset": 0, 00:07:14.903 "data_size": 0 00:07:14.903 }, 00:07:14.903 { 00:07:14.903 "name": "BaseBdev2", 00:07:14.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.903 "is_configured": false, 00:07:14.903 "data_offset": 0, 00:07:14.903 "data_size": 0 00:07:14.903 } 00:07:14.903 ] 00:07:14.903 }' 00:07:14.904 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.904 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.164 [2024-11-18 10:35:40.938928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.164 [2024-11-18 10:35:40.938971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:15.164 [2024-11-18 10:35:40.950898] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.164 [2024-11-18 10:35:40.950943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.164 [2024-11-18 10:35:40.950953] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.164 [2024-11-18 10:35:40.950965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.164 10:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.164 [2024-11-18 10:35:41.003982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.164 BaseBdev1 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.164 [ 00:07:15.164 { 00:07:15.164 "name": "BaseBdev1", 00:07:15.164 "aliases": [ 00:07:15.164 "e3883bb3-7d51-4bb3-a89f-ed6e637b3b24" 00:07:15.164 ], 00:07:15.164 "product_name": "Malloc disk", 00:07:15.164 "block_size": 512, 00:07:15.164 "num_blocks": 65536, 00:07:15.164 "uuid": "e3883bb3-7d51-4bb3-a89f-ed6e637b3b24", 00:07:15.164 "assigned_rate_limits": { 00:07:15.164 "rw_ios_per_sec": 0, 00:07:15.164 "rw_mbytes_per_sec": 0, 00:07:15.164 "r_mbytes_per_sec": 0, 00:07:15.164 "w_mbytes_per_sec": 0 00:07:15.164 }, 00:07:15.164 "claimed": true, 00:07:15.164 "claim_type": "exclusive_write", 00:07:15.164 "zoned": false, 00:07:15.164 "supported_io_types": { 00:07:15.164 "read": true, 00:07:15.164 "write": true, 00:07:15.164 "unmap": true, 00:07:15.164 "flush": true, 00:07:15.164 "reset": true, 00:07:15.164 "nvme_admin": false, 00:07:15.164 "nvme_io": false, 00:07:15.164 "nvme_io_md": false, 00:07:15.164 "write_zeroes": true, 00:07:15.164 "zcopy": true, 00:07:15.164 "get_zone_info": false, 00:07:15.164 "zone_management": false, 00:07:15.164 "zone_append": false, 00:07:15.164 "compare": false, 00:07:15.164 "compare_and_write": false, 00:07:15.164 "abort": true, 00:07:15.164 "seek_hole": false, 00:07:15.164 "seek_data": false, 00:07:15.164 "copy": true, 00:07:15.164 "nvme_iov_md": 
false 00:07:15.164 }, 00:07:15.164 "memory_domains": [ 00:07:15.164 { 00:07:15.164 "dma_device_id": "system", 00:07:15.164 "dma_device_type": 1 00:07:15.164 }, 00:07:15.164 { 00:07:15.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.164 "dma_device_type": 2 00:07:15.164 } 00:07:15.164 ], 00:07:15.164 "driver_specific": {} 00:07:15.164 } 00:07:15.164 ] 00:07:15.164 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.165 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.424 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.424 
10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.424 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.424 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.424 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.424 "name": "Existed_Raid", 00:07:15.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.425 "strip_size_kb": 64, 00:07:15.425 "state": "configuring", 00:07:15.425 "raid_level": "concat", 00:07:15.425 "superblock": false, 00:07:15.425 "num_base_bdevs": 2, 00:07:15.425 "num_base_bdevs_discovered": 1, 00:07:15.425 "num_base_bdevs_operational": 2, 00:07:15.425 "base_bdevs_list": [ 00:07:15.425 { 00:07:15.425 "name": "BaseBdev1", 00:07:15.425 "uuid": "e3883bb3-7d51-4bb3-a89f-ed6e637b3b24", 00:07:15.425 "is_configured": true, 00:07:15.425 "data_offset": 0, 00:07:15.425 "data_size": 65536 00:07:15.425 }, 00:07:15.425 { 00:07:15.425 "name": "BaseBdev2", 00:07:15.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.425 "is_configured": false, 00:07:15.425 "data_offset": 0, 00:07:15.425 "data_size": 0 00:07:15.425 } 00:07:15.425 ] 00:07:15.425 }' 00:07:15.425 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.425 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.683 [2024-11-18 10:35:41.495190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.683 [2024-11-18 10:35:41.495244] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.683 [2024-11-18 10:35:41.507221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.683 [2024-11-18 10:35:41.509195] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.683 [2024-11-18 10:35:41.509233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:15.683 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.684 "name": "Existed_Raid", 00:07:15.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.684 "strip_size_kb": 64, 00:07:15.684 "state": "configuring", 00:07:15.684 "raid_level": "concat", 00:07:15.684 "superblock": false, 00:07:15.684 "num_base_bdevs": 2, 00:07:15.684 "num_base_bdevs_discovered": 1, 00:07:15.684 "num_base_bdevs_operational": 2, 00:07:15.684 "base_bdevs_list": [ 00:07:15.684 { 00:07:15.684 "name": "BaseBdev1", 00:07:15.684 "uuid": "e3883bb3-7d51-4bb3-a89f-ed6e637b3b24", 00:07:15.684 "is_configured": true, 00:07:15.684 "data_offset": 0, 00:07:15.684 "data_size": 65536 00:07:15.684 }, 00:07:15.684 { 00:07:15.684 "name": "BaseBdev2", 00:07:15.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.684 "is_configured": false, 00:07:15.684 "data_offset": 0, 00:07:15.684 "data_size": 0 00:07:15.684 } 
00:07:15.684 ] 00:07:15.684 }' 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.684 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.278 [2024-11-18 10:35:41.974154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.278 [2024-11-18 10:35:41.974220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:16.278 [2024-11-18 10:35:41.974229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:16.278 [2024-11-18 10:35:41.974520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:16.278 [2024-11-18 10:35:41.974697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:16.278 [2024-11-18 10:35:41.974719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:16.278 [2024-11-18 10:35:41.975014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.278 BaseBdev2 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:16.278 10:35:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.278 10:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.278 [ 00:07:16.278 { 00:07:16.278 "name": "BaseBdev2", 00:07:16.278 "aliases": [ 00:07:16.278 "c2bbbeb7-e2c2-42d7-91eb-f9adcefb1602" 00:07:16.278 ], 00:07:16.278 "product_name": "Malloc disk", 00:07:16.278 "block_size": 512, 00:07:16.278 "num_blocks": 65536, 00:07:16.278 "uuid": "c2bbbeb7-e2c2-42d7-91eb-f9adcefb1602", 00:07:16.278 "assigned_rate_limits": { 00:07:16.278 "rw_ios_per_sec": 0, 00:07:16.278 "rw_mbytes_per_sec": 0, 00:07:16.278 "r_mbytes_per_sec": 0, 00:07:16.278 "w_mbytes_per_sec": 0 00:07:16.278 }, 00:07:16.278 "claimed": true, 00:07:16.278 "claim_type": "exclusive_write", 00:07:16.278 "zoned": false, 00:07:16.278 "supported_io_types": { 00:07:16.278 "read": true, 00:07:16.278 "write": true, 00:07:16.278 "unmap": true, 00:07:16.278 "flush": true, 00:07:16.278 "reset": true, 00:07:16.278 "nvme_admin": false, 00:07:16.278 "nvme_io": false, 00:07:16.278 "nvme_io_md": 
false, 00:07:16.278 "write_zeroes": true, 00:07:16.278 "zcopy": true, 00:07:16.278 "get_zone_info": false, 00:07:16.278 "zone_management": false, 00:07:16.278 "zone_append": false, 00:07:16.278 "compare": false, 00:07:16.278 "compare_and_write": false, 00:07:16.278 "abort": true, 00:07:16.278 "seek_hole": false, 00:07:16.278 "seek_data": false, 00:07:16.278 "copy": true, 00:07:16.278 "nvme_iov_md": false 00:07:16.278 }, 00:07:16.278 "memory_domains": [ 00:07:16.278 { 00:07:16.278 "dma_device_id": "system", 00:07:16.278 "dma_device_type": 1 00:07:16.278 }, 00:07:16.278 { 00:07:16.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.278 "dma_device_type": 2 00:07:16.278 } 00:07:16.278 ], 00:07:16.278 "driver_specific": {} 00:07:16.278 } 00:07:16.278 ] 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.278 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.279 "name": "Existed_Raid", 00:07:16.279 "uuid": "47ad9e2f-7283-4fa1-a746-b7044c614a8f", 00:07:16.279 "strip_size_kb": 64, 00:07:16.279 "state": "online", 00:07:16.279 "raid_level": "concat", 00:07:16.279 "superblock": false, 00:07:16.279 "num_base_bdevs": 2, 00:07:16.279 "num_base_bdevs_discovered": 2, 00:07:16.279 "num_base_bdevs_operational": 2, 00:07:16.279 "base_bdevs_list": [ 00:07:16.279 { 00:07:16.279 "name": "BaseBdev1", 00:07:16.279 "uuid": "e3883bb3-7d51-4bb3-a89f-ed6e637b3b24", 00:07:16.279 "is_configured": true, 00:07:16.279 "data_offset": 0, 00:07:16.279 "data_size": 65536 00:07:16.279 }, 00:07:16.279 { 00:07:16.279 "name": "BaseBdev2", 00:07:16.279 "uuid": "c2bbbeb7-e2c2-42d7-91eb-f9adcefb1602", 00:07:16.279 "is_configured": true, 00:07:16.279 "data_offset": 0, 00:07:16.279 "data_size": 65536 00:07:16.279 } 00:07:16.279 ] 00:07:16.279 }' 00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:16.279 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.849 [2024-11-18 10:35:42.453607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:16.849 "name": "Existed_Raid", 00:07:16.849 "aliases": [ 00:07:16.849 "47ad9e2f-7283-4fa1-a746-b7044c614a8f" 00:07:16.849 ], 00:07:16.849 "product_name": "Raid Volume", 00:07:16.849 "block_size": 512, 00:07:16.849 "num_blocks": 131072, 00:07:16.849 "uuid": "47ad9e2f-7283-4fa1-a746-b7044c614a8f", 00:07:16.849 "assigned_rate_limits": { 00:07:16.849 "rw_ios_per_sec": 0, 00:07:16.849 "rw_mbytes_per_sec": 0, 00:07:16.849 "r_mbytes_per_sec": 
0, 00:07:16.849 "w_mbytes_per_sec": 0 00:07:16.849 }, 00:07:16.849 "claimed": false, 00:07:16.849 "zoned": false, 00:07:16.849 "supported_io_types": { 00:07:16.849 "read": true, 00:07:16.849 "write": true, 00:07:16.849 "unmap": true, 00:07:16.849 "flush": true, 00:07:16.849 "reset": true, 00:07:16.849 "nvme_admin": false, 00:07:16.849 "nvme_io": false, 00:07:16.849 "nvme_io_md": false, 00:07:16.849 "write_zeroes": true, 00:07:16.849 "zcopy": false, 00:07:16.849 "get_zone_info": false, 00:07:16.849 "zone_management": false, 00:07:16.849 "zone_append": false, 00:07:16.849 "compare": false, 00:07:16.849 "compare_and_write": false, 00:07:16.849 "abort": false, 00:07:16.849 "seek_hole": false, 00:07:16.849 "seek_data": false, 00:07:16.849 "copy": false, 00:07:16.849 "nvme_iov_md": false 00:07:16.849 }, 00:07:16.849 "memory_domains": [ 00:07:16.849 { 00:07:16.849 "dma_device_id": "system", 00:07:16.849 "dma_device_type": 1 00:07:16.849 }, 00:07:16.849 { 00:07:16.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.849 "dma_device_type": 2 00:07:16.849 }, 00:07:16.849 { 00:07:16.849 "dma_device_id": "system", 00:07:16.849 "dma_device_type": 1 00:07:16.849 }, 00:07:16.849 { 00:07:16.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.849 "dma_device_type": 2 00:07:16.849 } 00:07:16.849 ], 00:07:16.849 "driver_specific": { 00:07:16.849 "raid": { 00:07:16.849 "uuid": "47ad9e2f-7283-4fa1-a746-b7044c614a8f", 00:07:16.849 "strip_size_kb": 64, 00:07:16.849 "state": "online", 00:07:16.849 "raid_level": "concat", 00:07:16.849 "superblock": false, 00:07:16.849 "num_base_bdevs": 2, 00:07:16.849 "num_base_bdevs_discovered": 2, 00:07:16.849 "num_base_bdevs_operational": 2, 00:07:16.849 "base_bdevs_list": [ 00:07:16.849 { 00:07:16.849 "name": "BaseBdev1", 00:07:16.849 "uuid": "e3883bb3-7d51-4bb3-a89f-ed6e637b3b24", 00:07:16.849 "is_configured": true, 00:07:16.849 "data_offset": 0, 00:07:16.849 "data_size": 65536 00:07:16.849 }, 00:07:16.849 { 00:07:16.849 "name": "BaseBdev2", 
00:07:16.849 "uuid": "c2bbbeb7-e2c2-42d7-91eb-f9adcefb1602", 00:07:16.849 "is_configured": true, 00:07:16.849 "data_offset": 0, 00:07:16.849 "data_size": 65536 00:07:16.849 } 00:07:16.849 ] 00:07:16.849 } 00:07:16.849 } 00:07:16.849 }' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:16.849 BaseBdev2' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.849 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.849 [2024-11-18 10:35:42.696959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.849 [2024-11-18 10:35:42.697024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.849 [2024-11-18 10:35:42.697073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.108 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.109 "name": "Existed_Raid", 00:07:17.109 "uuid": "47ad9e2f-7283-4fa1-a746-b7044c614a8f", 00:07:17.109 "strip_size_kb": 64, 00:07:17.109 
"state": "offline", 00:07:17.109 "raid_level": "concat", 00:07:17.109 "superblock": false, 00:07:17.109 "num_base_bdevs": 2, 00:07:17.109 "num_base_bdevs_discovered": 1, 00:07:17.109 "num_base_bdevs_operational": 1, 00:07:17.109 "base_bdevs_list": [ 00:07:17.109 { 00:07:17.109 "name": null, 00:07:17.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.109 "is_configured": false, 00:07:17.109 "data_offset": 0, 00:07:17.109 "data_size": 65536 00:07:17.109 }, 00:07:17.109 { 00:07:17.109 "name": "BaseBdev2", 00:07:17.109 "uuid": "c2bbbeb7-e2c2-42d7-91eb-f9adcefb1602", 00:07:17.109 "is_configured": true, 00:07:17.109 "data_offset": 0, 00:07:17.109 "data_size": 65536 00:07:17.109 } 00:07:17.109 ] 00:07:17.109 }' 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.109 10:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.368 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.628 [2024-11-18 10:35:43.251799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.628 [2024-11-18 10:35:43.251867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61596 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61596 ']' 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61596 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61596 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.628 killing process with pid 61596 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61596' 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61596 00:07:17.628 [2024-11-18 10:35:43.449132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.628 10:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61596 00:07:17.628 [2024-11-18 10:35:43.465550] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:19.010 00:07:19.010 real 0m5.035s 00:07:19.010 user 0m7.102s 00:07:19.010 sys 0m0.967s 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.010 ************************************ 00:07:19.010 END TEST raid_state_function_test 00:07:19.010 ************************************ 00:07:19.010 10:35:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:19.010 10:35:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:19.010 10:35:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.010 10:35:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.010 ************************************ 00:07:19.010 START TEST raid_state_function_test_sb 00:07:19.010 ************************************ 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61849 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61849' 00:07:19.010 Process raid pid: 61849 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61849 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61849 ']' 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.010 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.010 10:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.010 [2024-11-18 10:35:44.786829] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:19.010 [2024-11-18 10:35:44.786944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.270 [2024-11-18 10:35:44.964120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.270 [2024-11-18 10:35:45.090286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.529 [2024-11-18 10:35:45.320894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.529 [2024-11-18 10:35:45.320941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.788 [2024-11-18 10:35:45.595344] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:19.788 [2024-11-18 10:35:45.595393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.788 [2024-11-18 10:35:45.595402] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.788 [2024-11-18 10:35:45.595412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.788 
10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.788 "name": "Existed_Raid", 00:07:19.788 "uuid": "9aba605c-422f-4922-b5d2-6652d4aff431", 00:07:19.788 "strip_size_kb": 64, 00:07:19.788 "state": "configuring", 00:07:19.788 "raid_level": "concat", 00:07:19.788 "superblock": true, 00:07:19.788 "num_base_bdevs": 2, 00:07:19.788 "num_base_bdevs_discovered": 0, 00:07:19.788 "num_base_bdevs_operational": 2, 00:07:19.788 "base_bdevs_list": [ 00:07:19.788 { 00:07:19.788 "name": "BaseBdev1", 00:07:19.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.788 "is_configured": false, 00:07:19.788 "data_offset": 0, 00:07:19.788 "data_size": 0 00:07:19.788 }, 00:07:19.788 { 00:07:19.788 "name": "BaseBdev2", 00:07:19.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.788 "is_configured": false, 00:07:19.788 "data_offset": 0, 00:07:19.788 "data_size": 0 00:07:19.788 } 00:07:19.788 ] 00:07:19.788 }' 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.788 10:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.357 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.358 [2024-11-18 10:35:46.010548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:20.358 [2024-11-18 10:35:46.010589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.358 [2024-11-18 10:35:46.022541] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.358 [2024-11-18 10:35:46.022579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.358 [2024-11-18 10:35:46.022588] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.358 [2024-11-18 10:35:46.022601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.358 [2024-11-18 10:35:46.074894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.358 BaseBdev1 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.358 [ 00:07:20.358 { 00:07:20.358 "name": "BaseBdev1", 00:07:20.358 "aliases": [ 00:07:20.358 "d4cca1f4-a062-4857-876a-4b2a91b0453c" 00:07:20.358 ], 00:07:20.358 "product_name": "Malloc disk", 00:07:20.358 "block_size": 512, 00:07:20.358 "num_blocks": 65536, 00:07:20.358 "uuid": "d4cca1f4-a062-4857-876a-4b2a91b0453c", 00:07:20.358 "assigned_rate_limits": { 00:07:20.358 "rw_ios_per_sec": 0, 00:07:20.358 "rw_mbytes_per_sec": 0, 00:07:20.358 "r_mbytes_per_sec": 0, 00:07:20.358 "w_mbytes_per_sec": 0 00:07:20.358 }, 00:07:20.358 "claimed": true, 
00:07:20.358 "claim_type": "exclusive_write", 00:07:20.358 "zoned": false, 00:07:20.358 "supported_io_types": { 00:07:20.358 "read": true, 00:07:20.358 "write": true, 00:07:20.358 "unmap": true, 00:07:20.358 "flush": true, 00:07:20.358 "reset": true, 00:07:20.358 "nvme_admin": false, 00:07:20.358 "nvme_io": false, 00:07:20.358 "nvme_io_md": false, 00:07:20.358 "write_zeroes": true, 00:07:20.358 "zcopy": true, 00:07:20.358 "get_zone_info": false, 00:07:20.358 "zone_management": false, 00:07:20.358 "zone_append": false, 00:07:20.358 "compare": false, 00:07:20.358 "compare_and_write": false, 00:07:20.358 "abort": true, 00:07:20.358 "seek_hole": false, 00:07:20.358 "seek_data": false, 00:07:20.358 "copy": true, 00:07:20.358 "nvme_iov_md": false 00:07:20.358 }, 00:07:20.358 "memory_domains": [ 00:07:20.358 { 00:07:20.358 "dma_device_id": "system", 00:07:20.358 "dma_device_type": 1 00:07:20.358 }, 00:07:20.358 { 00:07:20.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.358 "dma_device_type": 2 00:07:20.358 } 00:07:20.358 ], 00:07:20.358 "driver_specific": {} 00:07:20.358 } 00:07:20.358 ] 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.358 10:35:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.358 "name": "Existed_Raid", 00:07:20.358 "uuid": "afdb2373-1ae3-4031-a143-f2a7a88bd3db", 00:07:20.358 "strip_size_kb": 64, 00:07:20.358 "state": "configuring", 00:07:20.358 "raid_level": "concat", 00:07:20.358 "superblock": true, 00:07:20.358 "num_base_bdevs": 2, 00:07:20.358 "num_base_bdevs_discovered": 1, 00:07:20.358 "num_base_bdevs_operational": 2, 00:07:20.358 "base_bdevs_list": [ 00:07:20.358 { 00:07:20.358 "name": "BaseBdev1", 00:07:20.358 "uuid": "d4cca1f4-a062-4857-876a-4b2a91b0453c", 00:07:20.358 "is_configured": true, 00:07:20.358 "data_offset": 2048, 00:07:20.358 "data_size": 63488 00:07:20.358 }, 00:07:20.358 { 00:07:20.358 "name": "BaseBdev2", 00:07:20.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.358 
"is_configured": false, 00:07:20.358 "data_offset": 0, 00:07:20.358 "data_size": 0 00:07:20.358 } 00:07:20.358 ] 00:07:20.358 }' 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.358 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 [2024-11-18 10:35:46.554065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:20.925 [2024-11-18 10:35:46.554117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.925 [2024-11-18 10:35:46.562115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.925 [2024-11-18 10:35:46.564157] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.925 [2024-11-18 10:35:46.564209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.925 10:35:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.925 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.926 10:35:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.926 "name": "Existed_Raid", 00:07:20.926 "uuid": "71cb3d1e-b7dd-4ca4-aac2-d9294bfaae8a", 00:07:20.926 "strip_size_kb": 64, 00:07:20.926 "state": "configuring", 00:07:20.926 "raid_level": "concat", 00:07:20.926 "superblock": true, 00:07:20.926 "num_base_bdevs": 2, 00:07:20.926 "num_base_bdevs_discovered": 1, 00:07:20.926 "num_base_bdevs_operational": 2, 00:07:20.926 "base_bdevs_list": [ 00:07:20.926 { 00:07:20.926 "name": "BaseBdev1", 00:07:20.926 "uuid": "d4cca1f4-a062-4857-876a-4b2a91b0453c", 00:07:20.926 "is_configured": true, 00:07:20.926 "data_offset": 2048, 00:07:20.926 "data_size": 63488 00:07:20.926 }, 00:07:20.926 { 00:07:20.926 "name": "BaseBdev2", 00:07:20.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.926 "is_configured": false, 00:07:20.926 "data_offset": 0, 00:07:20.926 "data_size": 0 00:07:20.926 } 00:07:20.926 ] 00:07:20.926 }' 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.926 10:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.185 [2024-11-18 10:35:47.053856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.185 [2024-11-18 10:35:47.054131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:21.185 [2024-11-18 10:35:47.054146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.185 [2024-11-18 10:35:47.054449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:21.185 [2024-11-18 10:35:47.054619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:21.185 [2024-11-18 10:35:47.054633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:21.185 [2024-11-18 10:35:47.054790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.185 BaseBdev2 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:21.185 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.185 10:35:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.445 [ 00:07:21.445 { 00:07:21.445 "name": "BaseBdev2", 00:07:21.445 "aliases": [ 00:07:21.445 "232a5e38-bebe-45fb-aa26-e860d03b88dd" 00:07:21.445 ], 00:07:21.445 "product_name": "Malloc disk", 00:07:21.445 "block_size": 512, 00:07:21.445 "num_blocks": 65536, 00:07:21.445 "uuid": "232a5e38-bebe-45fb-aa26-e860d03b88dd", 00:07:21.445 "assigned_rate_limits": { 00:07:21.445 "rw_ios_per_sec": 0, 00:07:21.445 "rw_mbytes_per_sec": 0, 00:07:21.445 "r_mbytes_per_sec": 0, 00:07:21.445 "w_mbytes_per_sec": 0 00:07:21.445 }, 00:07:21.445 "claimed": true, 00:07:21.445 "claim_type": "exclusive_write", 00:07:21.445 "zoned": false, 00:07:21.445 "supported_io_types": { 00:07:21.445 "read": true, 00:07:21.445 "write": true, 00:07:21.445 "unmap": true, 00:07:21.445 "flush": true, 00:07:21.445 "reset": true, 00:07:21.445 "nvme_admin": false, 00:07:21.445 "nvme_io": false, 00:07:21.445 "nvme_io_md": false, 00:07:21.445 "write_zeroes": true, 00:07:21.445 "zcopy": true, 00:07:21.445 "get_zone_info": false, 00:07:21.445 "zone_management": false, 00:07:21.445 "zone_append": false, 00:07:21.445 "compare": false, 00:07:21.445 "compare_and_write": false, 00:07:21.445 "abort": true, 00:07:21.445 "seek_hole": false, 00:07:21.445 "seek_data": false, 00:07:21.445 "copy": true, 00:07:21.445 "nvme_iov_md": false 00:07:21.445 }, 00:07:21.445 "memory_domains": [ 00:07:21.445 { 00:07:21.445 "dma_device_id": "system", 00:07:21.445 "dma_device_type": 1 00:07:21.445 }, 00:07:21.445 { 00:07:21.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.445 "dma_device_type": 2 00:07:21.445 } 00:07:21.445 ], 00:07:21.445 "driver_specific": {} 00:07:21.445 } 00:07:21.445 ] 00:07:21.445 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.445 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:21.445 10:35:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:21.445 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.445 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:21.445 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.445 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.445 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.445 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.446 10:35:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.446 "name": "Existed_Raid", 00:07:21.446 "uuid": "71cb3d1e-b7dd-4ca4-aac2-d9294bfaae8a", 00:07:21.446 "strip_size_kb": 64, 00:07:21.446 "state": "online", 00:07:21.446 "raid_level": "concat", 00:07:21.446 "superblock": true, 00:07:21.446 "num_base_bdevs": 2, 00:07:21.446 "num_base_bdevs_discovered": 2, 00:07:21.446 "num_base_bdevs_operational": 2, 00:07:21.446 "base_bdevs_list": [ 00:07:21.446 { 00:07:21.446 "name": "BaseBdev1", 00:07:21.446 "uuid": "d4cca1f4-a062-4857-876a-4b2a91b0453c", 00:07:21.446 "is_configured": true, 00:07:21.446 "data_offset": 2048, 00:07:21.446 "data_size": 63488 00:07:21.446 }, 00:07:21.446 { 00:07:21.446 "name": "BaseBdev2", 00:07:21.446 "uuid": "232a5e38-bebe-45fb-aa26-e860d03b88dd", 00:07:21.446 "is_configured": true, 00:07:21.446 "data_offset": 2048, 00:07:21.446 "data_size": 63488 00:07:21.446 } 00:07:21.446 ] 00:07:21.446 }' 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.446 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.705 [2024-11-18 10:35:47.509362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.705 "name": "Existed_Raid", 00:07:21.705 "aliases": [ 00:07:21.705 "71cb3d1e-b7dd-4ca4-aac2-d9294bfaae8a" 00:07:21.705 ], 00:07:21.705 "product_name": "Raid Volume", 00:07:21.705 "block_size": 512, 00:07:21.705 "num_blocks": 126976, 00:07:21.705 "uuid": "71cb3d1e-b7dd-4ca4-aac2-d9294bfaae8a", 00:07:21.705 "assigned_rate_limits": { 00:07:21.705 "rw_ios_per_sec": 0, 00:07:21.705 "rw_mbytes_per_sec": 0, 00:07:21.705 "r_mbytes_per_sec": 0, 00:07:21.705 "w_mbytes_per_sec": 0 00:07:21.705 }, 00:07:21.705 "claimed": false, 00:07:21.705 "zoned": false, 00:07:21.705 "supported_io_types": { 00:07:21.705 "read": true, 00:07:21.705 "write": true, 00:07:21.705 "unmap": true, 00:07:21.705 "flush": true, 00:07:21.705 "reset": true, 00:07:21.705 "nvme_admin": false, 00:07:21.705 "nvme_io": false, 00:07:21.705 "nvme_io_md": false, 00:07:21.705 "write_zeroes": true, 00:07:21.705 "zcopy": false, 00:07:21.705 "get_zone_info": false, 00:07:21.705 "zone_management": false, 00:07:21.705 "zone_append": false, 00:07:21.705 "compare": false, 00:07:21.705 "compare_and_write": false, 00:07:21.705 "abort": false, 00:07:21.705 "seek_hole": false, 00:07:21.705 "seek_data": false, 00:07:21.705 "copy": false, 00:07:21.705 "nvme_iov_md": false 00:07:21.705 }, 00:07:21.705 "memory_domains": [ 00:07:21.705 { 00:07:21.705 
"dma_device_id": "system", 00:07:21.705 "dma_device_type": 1 00:07:21.705 }, 00:07:21.705 { 00:07:21.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.705 "dma_device_type": 2 00:07:21.705 }, 00:07:21.705 { 00:07:21.705 "dma_device_id": "system", 00:07:21.705 "dma_device_type": 1 00:07:21.705 }, 00:07:21.705 { 00:07:21.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.705 "dma_device_type": 2 00:07:21.705 } 00:07:21.705 ], 00:07:21.705 "driver_specific": { 00:07:21.705 "raid": { 00:07:21.705 "uuid": "71cb3d1e-b7dd-4ca4-aac2-d9294bfaae8a", 00:07:21.705 "strip_size_kb": 64, 00:07:21.705 "state": "online", 00:07:21.705 "raid_level": "concat", 00:07:21.705 "superblock": true, 00:07:21.705 "num_base_bdevs": 2, 00:07:21.705 "num_base_bdevs_discovered": 2, 00:07:21.705 "num_base_bdevs_operational": 2, 00:07:21.705 "base_bdevs_list": [ 00:07:21.705 { 00:07:21.705 "name": "BaseBdev1", 00:07:21.705 "uuid": "d4cca1f4-a062-4857-876a-4b2a91b0453c", 00:07:21.705 "is_configured": true, 00:07:21.705 "data_offset": 2048, 00:07:21.705 "data_size": 63488 00:07:21.705 }, 00:07:21.705 { 00:07:21.705 "name": "BaseBdev2", 00:07:21.705 "uuid": "232a5e38-bebe-45fb-aa26-e860d03b88dd", 00:07:21.705 "is_configured": true, 00:07:21.705 "data_offset": 2048, 00:07:21.705 "data_size": 63488 00:07:21.705 } 00:07:21.705 ] 00:07:21.705 } 00:07:21.705 } 00:07:21.705 }' 00:07:21.705 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:21.965 BaseBdev2' 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.965 10:35:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.965 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.966 [2024-11-18 10:35:47.736738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.966 [2024-11-18 10:35:47.736771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.966 [2024-11-18 10:35:47.736820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.966 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.225 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.225 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.225 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.225 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.225 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.225 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.225 "name": "Existed_Raid", 00:07:22.225 "uuid": "71cb3d1e-b7dd-4ca4-aac2-d9294bfaae8a", 00:07:22.225 "strip_size_kb": 64, 00:07:22.225 "state": "offline", 00:07:22.225 "raid_level": "concat", 00:07:22.225 "superblock": true, 00:07:22.225 "num_base_bdevs": 2, 00:07:22.225 "num_base_bdevs_discovered": 1, 00:07:22.225 "num_base_bdevs_operational": 1, 00:07:22.225 "base_bdevs_list": [ 00:07:22.225 { 00:07:22.225 "name": null, 00:07:22.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.225 "is_configured": false, 00:07:22.225 "data_offset": 0, 00:07:22.225 "data_size": 63488 00:07:22.225 }, 00:07:22.225 { 00:07:22.225 "name": "BaseBdev2", 00:07:22.225 "uuid": "232a5e38-bebe-45fb-aa26-e860d03b88dd", 00:07:22.225 "is_configured": true, 00:07:22.225 "data_offset": 2048, 00:07:22.225 "data_size": 63488 00:07:22.225 } 00:07:22.225 ] 
00:07:22.225 }' 00:07:22.225 10:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.225 10:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.484 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.484 [2024-11-18 10:35:48.355071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:22.484 [2024-11-18 10:35:48.355144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.744 10:35:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61849 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61849 ']' 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61849 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61849 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:22.744 killing process with pid 61849 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61849' 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61849 00:07:22.744 [2024-11-18 10:35:48.556158] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.744 10:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61849 00:07:22.744 [2024-11-18 10:35:48.572428] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.124 10:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:24.124 00:07:24.124 real 0m5.038s 00:07:24.124 user 0m7.120s 00:07:24.124 sys 0m0.911s 00:07:24.124 10:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.124 10:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.124 ************************************ 00:07:24.124 END TEST raid_state_function_test_sb 00:07:24.124 ************************************ 00:07:24.124 10:35:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:24.124 10:35:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:24.124 10:35:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.124 10:35:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.124 ************************************ 00:07:24.124 START TEST raid_superblock_test 00:07:24.124 ************************************ 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62101 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62101 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62101 ']' 00:07:24.124 10:35:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.124 10:35:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.124 [2024-11-18 10:35:49.884464] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:24.124 [2024-11-18 10:35:49.884610] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62101 ] 00:07:24.384 [2024-11-18 10:35:50.042991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.384 [2024-11-18 10:35:50.173216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.643 [2024-11-18 10:35:50.406422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.643 [2024-11-18 10:35:50.406461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.904 
10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.904 malloc1 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.904 [2024-11-18 10:35:50.751777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:24.904 [2024-11-18 10:35:50.751847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.904 [2024-11-18 10:35:50.751873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:24.904 [2024-11-18 10:35:50.751883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:24.904 [2024-11-18 10:35:50.754362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.904 [2024-11-18 10:35:50.754398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:24.904 pt1 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.904 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 malloc2 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 [2024-11-18 10:35:50.813216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:25.164 [2024-11-18 10:35:50.813271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.164 [2024-11-18 10:35:50.813296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:25.164 [2024-11-18 10:35:50.813305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.164 [2024-11-18 10:35:50.815727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.164 [2024-11-18 10:35:50.815762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:25.164 pt2 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 [2024-11-18 10:35:50.825254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:25.164 [2024-11-18 10:35:50.827244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:25.164 [2024-11-18 10:35:50.827404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:25.164 [2024-11-18 10:35:50.827416] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:25.164 [2024-11-18 10:35:50.827646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:25.164 [2024-11-18 10:35:50.827798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:25.164 [2024-11-18 10:35:50.827815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:25.164 [2024-11-18 10:35:50.827972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.164 10:35:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.164 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.164 "name": "raid_bdev1", 00:07:25.164 "uuid": "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379", 00:07:25.164 "strip_size_kb": 64, 00:07:25.164 "state": "online", 00:07:25.164 "raid_level": "concat", 00:07:25.164 "superblock": true, 00:07:25.164 "num_base_bdevs": 2, 00:07:25.164 "num_base_bdevs_discovered": 2, 00:07:25.164 "num_base_bdevs_operational": 2, 00:07:25.164 "base_bdevs_list": [ 00:07:25.164 { 00:07:25.164 "name": "pt1", 00:07:25.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.164 "is_configured": true, 00:07:25.164 "data_offset": 2048, 00:07:25.164 "data_size": 63488 00:07:25.164 }, 00:07:25.164 { 00:07:25.164 "name": "pt2", 00:07:25.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.165 "is_configured": true, 00:07:25.165 "data_offset": 2048, 00:07:25.165 "data_size": 63488 00:07:25.165 } 00:07:25.165 ] 00:07:25.165 }' 00:07:25.165 10:35:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.165 10:35:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.429 
10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.429 [2024-11-18 10:35:51.252683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:25.429 "name": "raid_bdev1", 00:07:25.429 "aliases": [ 00:07:25.429 "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379" 00:07:25.429 ], 00:07:25.429 "product_name": "Raid Volume", 00:07:25.429 "block_size": 512, 00:07:25.429 "num_blocks": 126976, 00:07:25.429 "uuid": "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379", 00:07:25.429 "assigned_rate_limits": { 00:07:25.429 "rw_ios_per_sec": 0, 00:07:25.429 "rw_mbytes_per_sec": 0, 00:07:25.429 "r_mbytes_per_sec": 0, 00:07:25.429 "w_mbytes_per_sec": 0 00:07:25.429 }, 00:07:25.429 "claimed": false, 00:07:25.429 "zoned": false, 00:07:25.429 "supported_io_types": { 00:07:25.429 "read": true, 00:07:25.429 "write": true, 00:07:25.429 "unmap": true, 00:07:25.429 "flush": true, 00:07:25.429 "reset": true, 00:07:25.429 "nvme_admin": false, 00:07:25.429 "nvme_io": false, 00:07:25.429 "nvme_io_md": false, 00:07:25.429 "write_zeroes": true, 00:07:25.429 "zcopy": false, 00:07:25.429 "get_zone_info": false, 00:07:25.429 "zone_management": false, 00:07:25.429 "zone_append": false, 00:07:25.429 "compare": false, 00:07:25.429 "compare_and_write": false, 00:07:25.429 "abort": false, 00:07:25.429 "seek_hole": false, 00:07:25.429 
"seek_data": false, 00:07:25.429 "copy": false, 00:07:25.429 "nvme_iov_md": false 00:07:25.429 }, 00:07:25.429 "memory_domains": [ 00:07:25.429 { 00:07:25.429 "dma_device_id": "system", 00:07:25.429 "dma_device_type": 1 00:07:25.429 }, 00:07:25.429 { 00:07:25.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.429 "dma_device_type": 2 00:07:25.429 }, 00:07:25.429 { 00:07:25.429 "dma_device_id": "system", 00:07:25.429 "dma_device_type": 1 00:07:25.429 }, 00:07:25.429 { 00:07:25.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.429 "dma_device_type": 2 00:07:25.429 } 00:07:25.429 ], 00:07:25.429 "driver_specific": { 00:07:25.429 "raid": { 00:07:25.429 "uuid": "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379", 00:07:25.429 "strip_size_kb": 64, 00:07:25.429 "state": "online", 00:07:25.429 "raid_level": "concat", 00:07:25.429 "superblock": true, 00:07:25.429 "num_base_bdevs": 2, 00:07:25.429 "num_base_bdevs_discovered": 2, 00:07:25.429 "num_base_bdevs_operational": 2, 00:07:25.429 "base_bdevs_list": [ 00:07:25.429 { 00:07:25.429 "name": "pt1", 00:07:25.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.429 "is_configured": true, 00:07:25.429 "data_offset": 2048, 00:07:25.429 "data_size": 63488 00:07:25.429 }, 00:07:25.429 { 00:07:25.429 "name": "pt2", 00:07:25.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.429 "is_configured": true, 00:07:25.429 "data_offset": 2048, 00:07:25.429 "data_size": 63488 00:07:25.429 } 00:07:25.429 ] 00:07:25.429 } 00:07:25.429 } 00:07:25.429 }' 00:07:25.429 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:25.697 pt2' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.697 10:35:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 [2024-11-18 10:35:51.460341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=04f8cf6e-fcf3-48a0-a5ac-b6fde9897379 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 04f8cf6e-fcf3-48a0-a5ac-b6fde9897379 ']' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 [2024-11-18 10:35:51.503999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.697 [2024-11-18 10:35:51.504023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.697 [2024-11-18 10:35:51.504095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.697 [2024-11-18 10:35:51.504137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.697 [2024-11-18 10:35:51.504149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.956 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 [2024-11-18 10:35:51.627814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:25.956 [2024-11-18 10:35:51.629767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:25.956 [2024-11-18 10:35:51.629831] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:25.957 [2024-11-18 10:35:51.629872] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:25.957 [2024-11-18 10:35:51.629885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.957 [2024-11-18 10:35:51.629894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:25.957 request: 00:07:25.957 { 00:07:25.957 "name": "raid_bdev1", 00:07:25.957 "raid_level": "concat", 00:07:25.957 "base_bdevs": [ 00:07:25.957 "malloc1", 00:07:25.957 "malloc2" 00:07:25.957 ], 00:07:25.957 "strip_size_kb": 64, 00:07:25.957 "superblock": false, 00:07:25.957 "method": "bdev_raid_create", 00:07:25.957 "req_id": 1 00:07:25.957 } 00:07:25.957 Got JSON-RPC error response 00:07:25.957 response: 00:07:25.957 { 00:07:25.957 "code": -17, 00:07:25.957 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:25.957 } 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.957 [2024-11-18 10:35:51.683704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:25.957 [2024-11-18 10:35:51.683749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.957 [2024-11-18 10:35:51.683766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:25.957 [2024-11-18 10:35:51.683778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.957 [2024-11-18 10:35:51.686050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.957 [2024-11-18 10:35:51.686084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:25.957 [2024-11-18 10:35:51.686147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:25.957 [2024-11-18 10:35:51.686220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:25.957 pt1 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.957 "name": "raid_bdev1", 00:07:25.957 "uuid": "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379", 00:07:25.957 "strip_size_kb": 64, 00:07:25.957 "state": "configuring", 00:07:25.957 "raid_level": "concat", 00:07:25.957 "superblock": true, 00:07:25.957 "num_base_bdevs": 2, 00:07:25.957 "num_base_bdevs_discovered": 1, 00:07:25.957 "num_base_bdevs_operational": 2, 00:07:25.957 "base_bdevs_list": [ 00:07:25.957 { 00:07:25.957 
"name": "pt1", 00:07:25.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.957 "is_configured": true, 00:07:25.957 "data_offset": 2048, 00:07:25.957 "data_size": 63488 00:07:25.957 }, 00:07:25.957 { 00:07:25.957 "name": null, 00:07:25.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.957 "is_configured": false, 00:07:25.957 "data_offset": 2048, 00:07:25.957 "data_size": 63488 00:07:25.957 } 00:07:25.957 ] 00:07:25.957 }' 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.957 10:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.525 [2024-11-18 10:35:52.166888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.525 [2024-11-18 10:35:52.166961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.525 [2024-11-18 10:35:52.166981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:26.525 [2024-11-18 10:35:52.166992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.525 [2024-11-18 10:35:52.167416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.525 [2024-11-18 10:35:52.167441] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.525 [2024-11-18 10:35:52.167505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:26.525 [2024-11-18 10:35:52.167528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.525 [2024-11-18 10:35:52.167629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.525 [2024-11-18 10:35:52.167647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.525 [2024-11-18 10:35:52.167874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:26.525 [2024-11-18 10:35:52.168024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.525 [2024-11-18 10:35:52.168037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:26.525 [2024-11-18 10:35:52.168166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.525 pt2 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.525 
10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.525 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.526 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.526 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.526 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.526 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.526 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.526 "name": "raid_bdev1", 00:07:26.526 "uuid": "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379", 00:07:26.526 "strip_size_kb": 64, 00:07:26.526 "state": "online", 00:07:26.526 "raid_level": "concat", 00:07:26.526 "superblock": true, 00:07:26.526 "num_base_bdevs": 2, 00:07:26.526 "num_base_bdevs_discovered": 2, 00:07:26.526 "num_base_bdevs_operational": 2, 00:07:26.526 "base_bdevs_list": [ 00:07:26.526 { 00:07:26.526 "name": "pt1", 00:07:26.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.526 "is_configured": true, 00:07:26.526 "data_offset": 2048, 00:07:26.526 "data_size": 63488 00:07:26.526 }, 00:07:26.526 { 00:07:26.526 "name": "pt2", 00:07:26.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.526 "is_configured": true, 00:07:26.526 "data_offset": 2048, 00:07:26.526 "data_size": 63488 
00:07:26.526 } 00:07:26.526 ] 00:07:26.526 }' 00:07:26.526 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.526 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.785 [2024-11-18 10:35:52.594609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.785 "name": "raid_bdev1", 00:07:26.785 "aliases": [ 00:07:26.785 "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379" 00:07:26.785 ], 00:07:26.785 "product_name": "Raid Volume", 00:07:26.785 "block_size": 512, 00:07:26.785 "num_blocks": 126976, 00:07:26.785 "uuid": "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379", 00:07:26.785 "assigned_rate_limits": { 00:07:26.785 
"rw_ios_per_sec": 0, 00:07:26.785 "rw_mbytes_per_sec": 0, 00:07:26.785 "r_mbytes_per_sec": 0, 00:07:26.785 "w_mbytes_per_sec": 0 00:07:26.785 }, 00:07:26.785 "claimed": false, 00:07:26.785 "zoned": false, 00:07:26.785 "supported_io_types": { 00:07:26.785 "read": true, 00:07:26.785 "write": true, 00:07:26.785 "unmap": true, 00:07:26.785 "flush": true, 00:07:26.785 "reset": true, 00:07:26.785 "nvme_admin": false, 00:07:26.785 "nvme_io": false, 00:07:26.785 "nvme_io_md": false, 00:07:26.785 "write_zeroes": true, 00:07:26.785 "zcopy": false, 00:07:26.785 "get_zone_info": false, 00:07:26.785 "zone_management": false, 00:07:26.785 "zone_append": false, 00:07:26.785 "compare": false, 00:07:26.785 "compare_and_write": false, 00:07:26.785 "abort": false, 00:07:26.785 "seek_hole": false, 00:07:26.785 "seek_data": false, 00:07:26.785 "copy": false, 00:07:26.785 "nvme_iov_md": false 00:07:26.785 }, 00:07:26.785 "memory_domains": [ 00:07:26.785 { 00:07:26.785 "dma_device_id": "system", 00:07:26.785 "dma_device_type": 1 00:07:26.785 }, 00:07:26.785 { 00:07:26.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.785 "dma_device_type": 2 00:07:26.785 }, 00:07:26.785 { 00:07:26.785 "dma_device_id": "system", 00:07:26.785 "dma_device_type": 1 00:07:26.785 }, 00:07:26.785 { 00:07:26.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.785 "dma_device_type": 2 00:07:26.785 } 00:07:26.785 ], 00:07:26.785 "driver_specific": { 00:07:26.785 "raid": { 00:07:26.785 "uuid": "04f8cf6e-fcf3-48a0-a5ac-b6fde9897379", 00:07:26.785 "strip_size_kb": 64, 00:07:26.785 "state": "online", 00:07:26.785 "raid_level": "concat", 00:07:26.785 "superblock": true, 00:07:26.785 "num_base_bdevs": 2, 00:07:26.785 "num_base_bdevs_discovered": 2, 00:07:26.785 "num_base_bdevs_operational": 2, 00:07:26.785 "base_bdevs_list": [ 00:07:26.785 { 00:07:26.785 "name": "pt1", 00:07:26.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.785 "is_configured": true, 00:07:26.785 "data_offset": 2048, 00:07:26.785 
"data_size": 63488 00:07:26.785 }, 00:07:26.785 { 00:07:26.785 "name": "pt2", 00:07:26.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.785 "is_configured": true, 00:07:26.785 "data_offset": 2048, 00:07:26.785 "data_size": 63488 00:07:26.785 } 00:07:26.785 ] 00:07:26.785 } 00:07:26.785 } 00:07:26.785 }' 00:07:26.785 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:27.046 pt2' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.046 [2024-11-18 10:35:52.786165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 04f8cf6e-fcf3-48a0-a5ac-b6fde9897379 '!=' 04f8cf6e-fcf3-48a0-a5ac-b6fde9897379 ']' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62101 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62101 ']' 
00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62101 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62101 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.046 killing process with pid 62101 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62101' 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62101 00:07:27.046 [2024-11-18 10:35:52.853073] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.046 [2024-11-18 10:35:52.853192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.046 [2024-11-18 10:35:52.853251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.046 [2024-11-18 10:35:52.853263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:27.046 10:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62101 00:07:27.306 [2024-11-18 10:35:53.072060] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.688 10:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:28.689 00:07:28.689 real 0m4.432s 00:07:28.689 user 0m6.056s 00:07:28.689 sys 0m0.815s 00:07:28.689 10:35:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.689 10:35:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.689 ************************************ 00:07:28.689 END TEST raid_superblock_test 00:07:28.689 ************************************ 00:07:28.689 10:35:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:28.689 10:35:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.689 10:35:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.689 10:35:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.689 ************************************ 00:07:28.689 START TEST raid_read_error_test 00:07:28.689 ************************************ 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.689 
10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JGtPdAtjTH 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62307 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62307 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62307 ']' 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:28.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.689 10:35:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.689 [2024-11-18 10:35:54.402604] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:28.689 [2024-11-18 10:35:54.402735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62307 ] 00:07:28.948 [2024-11-18 10:35:54.576475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.948 [2024-11-18 10:35:54.707855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.208 [2024-11-18 10:35:54.940696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.208 [2024-11-18 10:35:54.940734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.468 BaseBdev1_malloc 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 true 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 [2024-11-18 10:35:55.291299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.468 [2024-11-18 10:35:55.291353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.468 [2024-11-18 10:35:55.291374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:29.468 [2024-11-18 10:35:55.291386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.468 [2024-11-18 10:35:55.293634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.468 [2024-11-18 10:35:55.293671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.468 BaseBdev1 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.468 10:35:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 BaseBdev2_malloc 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.468 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.728 true 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.728 [2024-11-18 10:35:55.362486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:29.728 [2024-11-18 10:35:55.362538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.728 [2024-11-18 10:35:55.362554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:29.728 [2024-11-18 10:35:55.362565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.728 [2024-11-18 10:35:55.364899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.728 [2024-11-18 10:35:55.364936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:29.728 BaseBdev2 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.728 [2024-11-18 10:35:55.374536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.728 [2024-11-18 10:35:55.376622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.728 [2024-11-18 10:35:55.376806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:29.728 [2024-11-18 10:35:55.376822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.728 [2024-11-18 10:35:55.377030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:29.728 [2024-11-18 10:35:55.377221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:29.728 [2024-11-18 10:35:55.377241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:29.728 [2024-11-18 10:35:55.377391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:29.728 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.729 "name": "raid_bdev1", 00:07:29.729 "uuid": "2e9b5ac2-f2e0-4aa7-92a9-1a974b435c35", 00:07:29.729 "strip_size_kb": 64, 00:07:29.729 "state": "online", 00:07:29.729 "raid_level": "concat", 00:07:29.729 "superblock": true, 00:07:29.729 "num_base_bdevs": 2, 00:07:29.729 "num_base_bdevs_discovered": 2, 00:07:29.729 "num_base_bdevs_operational": 2, 00:07:29.729 "base_bdevs_list": [ 00:07:29.729 { 00:07:29.729 "name": "BaseBdev1", 00:07:29.729 "uuid": "b2df7a04-6850-5ca2-8597-5e14041f5b8b", 00:07:29.729 "is_configured": true, 00:07:29.729 "data_offset": 2048, 00:07:29.729 "data_size": 63488 
00:07:29.729 }, 00:07:29.729 { 00:07:29.729 "name": "BaseBdev2", 00:07:29.729 "uuid": "c5f201df-9bba-540e-8644-83eb9c478816", 00:07:29.729 "is_configured": true, 00:07:29.729 "data_offset": 2048, 00:07:29.729 "data_size": 63488 00:07:29.729 } 00:07:29.729 ] 00:07:29.729 }' 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.729 10:35:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.989 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:29.989 10:35:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:30.249 [2024-11-18 10:35:55.946780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.188 "name": "raid_bdev1", 00:07:31.188 "uuid": "2e9b5ac2-f2e0-4aa7-92a9-1a974b435c35", 00:07:31.188 "strip_size_kb": 64, 00:07:31.188 "state": "online", 00:07:31.188 "raid_level": "concat", 00:07:31.188 "superblock": true, 00:07:31.188 "num_base_bdevs": 2, 00:07:31.188 "num_base_bdevs_discovered": 2, 00:07:31.188 "num_base_bdevs_operational": 2, 00:07:31.188 "base_bdevs_list": [ 00:07:31.188 { 00:07:31.188 "name": "BaseBdev1", 00:07:31.188 "uuid": "b2df7a04-6850-5ca2-8597-5e14041f5b8b", 00:07:31.188 "is_configured": true, 00:07:31.188 "data_offset": 2048, 00:07:31.188 "data_size": 63488 
00:07:31.188 }, 00:07:31.188 { 00:07:31.188 "name": "BaseBdev2", 00:07:31.188 "uuid": "c5f201df-9bba-540e-8644-83eb9c478816", 00:07:31.188 "is_configured": true, 00:07:31.188 "data_offset": 2048, 00:07:31.188 "data_size": 63488 00:07:31.188 } 00:07:31.188 ] 00:07:31.188 }' 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.188 10:35:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.448 10:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.448 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.448 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.448 [2024-11-18 10:35:57.322718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.448 [2024-11-18 10:35:57.322772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.448 [2024-11-18 10:35:57.325450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.448 [2024-11-18 10:35:57.325504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.448 [2024-11-18 10:35:57.325539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.448 [2024-11-18 10:35:57.325555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:31.448 { 00:07:31.448 "results": [ 00:07:31.448 { 00:07:31.448 "job": "raid_bdev1", 00:07:31.448 "core_mask": "0x1", 00:07:31.448 "workload": "randrw", 00:07:31.448 "percentage": 50, 00:07:31.448 "status": "finished", 00:07:31.448 "queue_depth": 1, 00:07:31.448 "io_size": 131072, 00:07:31.448 "runtime": 1.376791, 00:07:31.448 "iops": 14873.717216338573, 00:07:31.448 "mibps": 1859.2146520423216, 00:07:31.448 
"io_failed": 1, 00:07:31.448 "io_timeout": 0, 00:07:31.448 "avg_latency_us": 94.47704763490813, 00:07:31.448 "min_latency_us": 24.482096069868994, 00:07:31.448 "max_latency_us": 1473.844541484716 00:07:31.448 } 00:07:31.448 ], 00:07:31.448 "core_count": 1 00:07:31.448 } 00:07:31.448 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.448 10:35:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62307 00:07:31.448 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62307 ']' 00:07:31.448 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62307 00:07:31.708 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:31.708 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.708 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62307 00:07:31.708 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.708 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.708 killing process with pid 62307 00:07:31.708 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62307' 00:07:31.708 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62307 00:07:31.708 [2024-11-18 10:35:57.373927] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.708 10:35:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62307 00:07:31.708 [2024-11-18 10:35:57.517921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JGtPdAtjTH 00:07:33.091 10:35:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:33.091 00:07:33.091 real 0m4.426s 00:07:33.091 user 0m5.227s 00:07:33.091 sys 0m0.620s 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.091 10:35:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.091 ************************************ 00:07:33.091 END TEST raid_read_error_test 00:07:33.091 ************************************ 00:07:33.091 10:35:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:33.091 10:35:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:33.091 10:35:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.091 10:35:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.091 ************************************ 00:07:33.091 START TEST raid_write_error_test 00:07:33.091 ************************************ 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:33.091 10:35:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:33.091 10:35:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8ZAR1O6MwS 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62447 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62447 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62447 ']' 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.091 10:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.091 [2024-11-18 10:35:58.898046] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:33.091 [2024-11-18 10:35:58.898165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62447 ] 00:07:33.350 [2024-11-18 10:35:59.074097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.350 [2024-11-18 10:35:59.200756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.610 [2024-11-18 10:35:59.431256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.610 [2024-11-18 10:35:59.431306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.870 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.870 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.870 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.870 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:33.870 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.870 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.129 BaseBdev1_malloc 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.129 true 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.129 [2024-11-18 10:35:59.779461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:34.129 [2024-11-18 10:35:59.779523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.129 [2024-11-18 10:35:59.779543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:34.129 [2024-11-18 10:35:59.779555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.129 [2024-11-18 10:35:59.781878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.129 [2024-11-18 10:35:59.781917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:34.129 BaseBdev1 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:34.129 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.130 BaseBdev2_malloc 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:34.130 10:35:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.130 true 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.130 [2024-11-18 10:35:59.852374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:34.130 [2024-11-18 10:35:59.852429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.130 [2024-11-18 10:35:59.852445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:34.130 [2024-11-18 10:35:59.852456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.130 [2024-11-18 10:35:59.854753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.130 [2024-11-18 10:35:59.854791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:34.130 BaseBdev2 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.130 [2024-11-18 10:35:59.864448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:34.130 [2024-11-18 10:35:59.866524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:34.130 [2024-11-18 10:35:59.866730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:34.130 [2024-11-18 10:35:59.866745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:34.130 [2024-11-18 10:35:59.866989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:34.130 [2024-11-18 10:35:59.867196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:34.130 [2024-11-18 10:35:59.867215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:34.130 [2024-11-18 10:35:59.867366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.130 10:35:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.130 "name": "raid_bdev1", 00:07:34.130 "uuid": "472a189b-ef22-4797-b483-cbbcfff8abc1", 00:07:34.130 "strip_size_kb": 64, 00:07:34.130 "state": "online", 00:07:34.130 "raid_level": "concat", 00:07:34.130 "superblock": true, 00:07:34.130 "num_base_bdevs": 2, 00:07:34.130 "num_base_bdevs_discovered": 2, 00:07:34.130 "num_base_bdevs_operational": 2, 00:07:34.130 "base_bdevs_list": [ 00:07:34.130 { 00:07:34.130 "name": "BaseBdev1", 00:07:34.130 "uuid": "a42021ae-1961-5b7c-aedd-007f48b20f69", 00:07:34.130 "is_configured": true, 00:07:34.130 "data_offset": 2048, 00:07:34.130 "data_size": 63488 00:07:34.130 }, 00:07:34.130 { 00:07:34.130 "name": "BaseBdev2", 00:07:34.130 "uuid": "fefdfa60-f176-530d-a9f5-30bd29aef54b", 00:07:34.130 "is_configured": true, 00:07:34.130 "data_offset": 2048, 00:07:34.130 "data_size": 63488 00:07:34.130 } 00:07:34.130 ] 00:07:34.130 }' 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.130 10:35:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.699 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:34.699 10:36:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:34.699 [2024-11-18 10:36:00.400738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.670 "name": "raid_bdev1", 00:07:35.670 "uuid": "472a189b-ef22-4797-b483-cbbcfff8abc1", 00:07:35.670 "strip_size_kb": 64, 00:07:35.670 "state": "online", 00:07:35.670 "raid_level": "concat", 00:07:35.670 "superblock": true, 00:07:35.670 "num_base_bdevs": 2, 00:07:35.670 "num_base_bdevs_discovered": 2, 00:07:35.670 "num_base_bdevs_operational": 2, 00:07:35.670 "base_bdevs_list": [ 00:07:35.670 { 00:07:35.670 "name": "BaseBdev1", 00:07:35.670 "uuid": "a42021ae-1961-5b7c-aedd-007f48b20f69", 00:07:35.670 "is_configured": true, 00:07:35.670 "data_offset": 2048, 00:07:35.670 "data_size": 63488 00:07:35.670 }, 00:07:35.670 { 00:07:35.670 "name": "BaseBdev2", 00:07:35.670 "uuid": "fefdfa60-f176-530d-a9f5-30bd29aef54b", 00:07:35.670 "is_configured": true, 00:07:35.670 "data_offset": 2048, 00:07:35.670 "data_size": 63488 00:07:35.670 } 00:07:35.670 ] 00:07:35.670 }' 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.670 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.930 [2024-11-18 10:36:01.773126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.930 [2024-11-18 10:36:01.773193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.930 [2024-11-18 10:36:01.775846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.930 [2024-11-18 10:36:01.775900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.930 [2024-11-18 10:36:01.775935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.930 [2024-11-18 10:36:01.775956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:35.930 { 00:07:35.930 "results": [ 00:07:35.930 { 00:07:35.930 "job": "raid_bdev1", 00:07:35.930 "core_mask": "0x1", 00:07:35.930 "workload": "randrw", 00:07:35.930 "percentage": 50, 00:07:35.930 "status": "finished", 00:07:35.930 "queue_depth": 1, 00:07:35.930 "io_size": 131072, 00:07:35.930 "runtime": 1.373247, 00:07:35.930 "iops": 15232.510975811343, 00:07:35.930 "mibps": 1904.0638719764179, 00:07:35.930 "io_failed": 1, 00:07:35.930 "io_timeout": 0, 00:07:35.930 "avg_latency_us": 92.2324155283083, 00:07:35.930 "min_latency_us": 24.034934497816593, 00:07:35.930 "max_latency_us": 1380.8349344978167 00:07:35.930 } 00:07:35.930 ], 00:07:35.930 "core_count": 1 00:07:35.930 } 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62447 00:07:35.930 10:36:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62447 ']' 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62447 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.930 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62447 00:07:36.190 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.190 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.190 killing process with pid 62447 00:07:36.190 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62447' 00:07:36.190 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62447 00:07:36.190 [2024-11-18 10:36:01.824388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:36.190 10:36:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62447 00:07:36.190 [2024-11-18 10:36:01.969412] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8ZAR1O6MwS 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:37.571 10:36:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:37.571 00:07:37.571 real 0m4.393s 00:07:37.571 user 0m5.123s 00:07:37.571 sys 0m0.664s 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.571 10:36:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.571 ************************************ 00:07:37.571 END TEST raid_write_error_test 00:07:37.571 ************************************ 00:07:37.571 10:36:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:37.571 10:36:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:37.571 10:36:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:37.571 10:36:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.571 10:36:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.571 ************************************ 00:07:37.571 START TEST raid_state_function_test 00:07:37.571 ************************************ 00:07:37.571 10:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:37.571 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62595 00:07:37.572 Process raid pid: 62595 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62595' 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62595 00:07:37.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62595 ']' 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.572 10:36:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.572 [2024-11-18 10:36:03.354814] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:37.572 [2024-11-18 10:36:03.355018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.832 [2024-11-18 10:36:03.531048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.832 [2024-11-18 10:36:03.666722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.092 [2024-11-18 10:36:03.903402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.092 [2024-11-18 10:36:03.903547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.351 [2024-11-18 10:36:04.184881] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.351 [2024-11-18 10:36:04.184940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.351 [2024-11-18 10:36:04.184951] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.351 [2024-11-18 10:36:04.184960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.351 10:36:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.351 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.352 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.352 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.352 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.352 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.352 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.611 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.611 "name": "Existed_Raid", 00:07:38.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.611 "strip_size_kb": 0, 00:07:38.611 "state": "configuring", 00:07:38.611 
"raid_level": "raid1", 00:07:38.611 "superblock": false, 00:07:38.612 "num_base_bdevs": 2, 00:07:38.612 "num_base_bdevs_discovered": 0, 00:07:38.612 "num_base_bdevs_operational": 2, 00:07:38.612 "base_bdevs_list": [ 00:07:38.612 { 00:07:38.612 "name": "BaseBdev1", 00:07:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.612 "is_configured": false, 00:07:38.612 "data_offset": 0, 00:07:38.612 "data_size": 0 00:07:38.612 }, 00:07:38.612 { 00:07:38.612 "name": "BaseBdev2", 00:07:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.612 "is_configured": false, 00:07:38.612 "data_offset": 0, 00:07:38.612 "data_size": 0 00:07:38.612 } 00:07:38.612 ] 00:07:38.612 }' 00:07:38.612 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.612 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.872 [2024-11-18 10:36:04.620116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.872 [2024-11-18 10:36:04.620210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:38.872 [2024-11-18 10:36:04.632089] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.872 [2024-11-18 10:36:04.632177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.872 [2024-11-18 10:36:04.632205] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.872 [2024-11-18 10:36:04.632232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.872 [2024-11-18 10:36:04.685118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.872 BaseBdev1 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.872 [ 00:07:38.872 { 00:07:38.872 "name": "BaseBdev1", 00:07:38.872 "aliases": [ 00:07:38.872 "190f52ac-1923-4609-94d7-b4bc88f2df5c" 00:07:38.872 ], 00:07:38.872 "product_name": "Malloc disk", 00:07:38.872 "block_size": 512, 00:07:38.872 "num_blocks": 65536, 00:07:38.872 "uuid": "190f52ac-1923-4609-94d7-b4bc88f2df5c", 00:07:38.872 "assigned_rate_limits": { 00:07:38.872 "rw_ios_per_sec": 0, 00:07:38.872 "rw_mbytes_per_sec": 0, 00:07:38.872 "r_mbytes_per_sec": 0, 00:07:38.872 "w_mbytes_per_sec": 0 00:07:38.872 }, 00:07:38.872 "claimed": true, 00:07:38.872 "claim_type": "exclusive_write", 00:07:38.872 "zoned": false, 00:07:38.872 "supported_io_types": { 00:07:38.872 "read": true, 00:07:38.872 "write": true, 00:07:38.872 "unmap": true, 00:07:38.872 "flush": true, 00:07:38.872 "reset": true, 00:07:38.872 "nvme_admin": false, 00:07:38.872 "nvme_io": false, 00:07:38.872 "nvme_io_md": false, 00:07:38.872 "write_zeroes": true, 00:07:38.872 "zcopy": true, 00:07:38.872 "get_zone_info": false, 00:07:38.872 "zone_management": false, 00:07:38.872 "zone_append": false, 00:07:38.872 "compare": false, 00:07:38.872 "compare_and_write": false, 00:07:38.872 "abort": true, 00:07:38.872 "seek_hole": false, 00:07:38.872 "seek_data": false, 00:07:38.872 "copy": true, 00:07:38.872 "nvme_iov_md": 
false 00:07:38.872 }, 00:07:38.872 "memory_domains": [ 00:07:38.872 { 00:07:38.872 "dma_device_id": "system", 00:07:38.872 "dma_device_type": 1 00:07:38.872 }, 00:07:38.872 { 00:07:38.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.872 "dma_device_type": 2 00:07:38.872 } 00:07:38.872 ], 00:07:38.872 "driver_specific": {} 00:07:38.872 } 00:07:38.872 ] 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.872 
10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.872 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.132 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.132 "name": "Existed_Raid", 00:07:39.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.132 "strip_size_kb": 0, 00:07:39.132 "state": "configuring", 00:07:39.132 "raid_level": "raid1", 00:07:39.132 "superblock": false, 00:07:39.132 "num_base_bdevs": 2, 00:07:39.132 "num_base_bdevs_discovered": 1, 00:07:39.132 "num_base_bdevs_operational": 2, 00:07:39.132 "base_bdevs_list": [ 00:07:39.132 { 00:07:39.132 "name": "BaseBdev1", 00:07:39.132 "uuid": "190f52ac-1923-4609-94d7-b4bc88f2df5c", 00:07:39.132 "is_configured": true, 00:07:39.132 "data_offset": 0, 00:07:39.132 "data_size": 65536 00:07:39.132 }, 00:07:39.132 { 00:07:39.132 "name": "BaseBdev2", 00:07:39.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.132 "is_configured": false, 00:07:39.132 "data_offset": 0, 00:07:39.132 "data_size": 0 00:07:39.132 } 00:07:39.132 ] 00:07:39.132 }' 00:07:39.132 10:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.132 10:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.392 [2024-11-18 10:36:05.124376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:39.392 [2024-11-18 10:36:05.124464] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.392 [2024-11-18 10:36:05.136426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.392 [2024-11-18 10:36:05.138416] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.392 [2024-11-18 10:36:05.138453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.392 "name": "Existed_Raid", 00:07:39.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.392 "strip_size_kb": 0, 00:07:39.392 "state": "configuring", 00:07:39.392 "raid_level": "raid1", 00:07:39.392 "superblock": false, 00:07:39.392 "num_base_bdevs": 2, 00:07:39.392 "num_base_bdevs_discovered": 1, 00:07:39.392 "num_base_bdevs_operational": 2, 00:07:39.392 "base_bdevs_list": [ 00:07:39.392 { 00:07:39.392 "name": "BaseBdev1", 00:07:39.392 "uuid": "190f52ac-1923-4609-94d7-b4bc88f2df5c", 00:07:39.392 "is_configured": true, 00:07:39.392 "data_offset": 0, 00:07:39.392 "data_size": 65536 00:07:39.392 }, 00:07:39.392 { 00:07:39.392 "name": "BaseBdev2", 00:07:39.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.392 "is_configured": false, 00:07:39.392 "data_offset": 0, 00:07:39.392 "data_size": 0 00:07:39.392 } 00:07:39.392 ] 
00:07:39.392 }' 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.392 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.961 [2024-11-18 10:36:05.646790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.961 [2024-11-18 10:36:05.646933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.961 [2024-11-18 10:36:05.646968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:39.961 [2024-11-18 10:36:05.647312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:39.961 [2024-11-18 10:36:05.647537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.961 [2024-11-18 10:36:05.647587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:39.961 [2024-11-18 10:36:05.647887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.961 BaseBdev2 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.961 [ 00:07:39.961 { 00:07:39.961 "name": "BaseBdev2", 00:07:39.961 "aliases": [ 00:07:39.961 "9e00fa87-4c7a-414b-90fd-98eb15cfd209" 00:07:39.961 ], 00:07:39.961 "product_name": "Malloc disk", 00:07:39.961 "block_size": 512, 00:07:39.961 "num_blocks": 65536, 00:07:39.961 "uuid": "9e00fa87-4c7a-414b-90fd-98eb15cfd209", 00:07:39.961 "assigned_rate_limits": { 00:07:39.961 "rw_ios_per_sec": 0, 00:07:39.961 "rw_mbytes_per_sec": 0, 00:07:39.961 "r_mbytes_per_sec": 0, 00:07:39.961 "w_mbytes_per_sec": 0 00:07:39.961 }, 00:07:39.961 "claimed": true, 00:07:39.961 "claim_type": "exclusive_write", 00:07:39.961 "zoned": false, 00:07:39.961 "supported_io_types": { 00:07:39.961 "read": true, 00:07:39.961 "write": true, 00:07:39.961 "unmap": true, 00:07:39.961 "flush": true, 00:07:39.961 "reset": true, 00:07:39.961 "nvme_admin": false, 00:07:39.961 "nvme_io": false, 00:07:39.961 "nvme_io_md": false, 00:07:39.961 "write_zeroes": 
true, 00:07:39.961 "zcopy": true, 00:07:39.961 "get_zone_info": false, 00:07:39.961 "zone_management": false, 00:07:39.961 "zone_append": false, 00:07:39.961 "compare": false, 00:07:39.961 "compare_and_write": false, 00:07:39.961 "abort": true, 00:07:39.961 "seek_hole": false, 00:07:39.961 "seek_data": false, 00:07:39.961 "copy": true, 00:07:39.961 "nvme_iov_md": false 00:07:39.961 }, 00:07:39.961 "memory_domains": [ 00:07:39.961 { 00:07:39.961 "dma_device_id": "system", 00:07:39.961 "dma_device_type": 1 00:07:39.961 }, 00:07:39.961 { 00:07:39.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.961 "dma_device_type": 2 00:07:39.961 } 00:07:39.961 ], 00:07:39.961 "driver_specific": {} 00:07:39.961 } 00:07:39.961 ] 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.961 10:36:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.961 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.962 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.962 10:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.962 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.962 "name": "Existed_Raid", 00:07:39.962 "uuid": "d314a136-2fae-4447-acda-bcaed05f870e", 00:07:39.962 "strip_size_kb": 0, 00:07:39.962 "state": "online", 00:07:39.962 "raid_level": "raid1", 00:07:39.962 "superblock": false, 00:07:39.962 "num_base_bdevs": 2, 00:07:39.962 "num_base_bdevs_discovered": 2, 00:07:39.962 "num_base_bdevs_operational": 2, 00:07:39.962 "base_bdevs_list": [ 00:07:39.962 { 00:07:39.962 "name": "BaseBdev1", 00:07:39.962 "uuid": "190f52ac-1923-4609-94d7-b4bc88f2df5c", 00:07:39.962 "is_configured": true, 00:07:39.962 "data_offset": 0, 00:07:39.962 "data_size": 65536 00:07:39.962 }, 00:07:39.962 { 00:07:39.962 "name": "BaseBdev2", 00:07:39.962 "uuid": "9e00fa87-4c7a-414b-90fd-98eb15cfd209", 00:07:39.962 "is_configured": true, 00:07:39.962 "data_offset": 0, 00:07:39.962 "data_size": 65536 00:07:39.962 } 00:07:39.962 ] 00:07:39.962 }' 00:07:39.962 10:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.962 10:36:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.530 [2024-11-18 10:36:06.174146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.530 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.530 "name": "Existed_Raid", 00:07:40.530 "aliases": [ 00:07:40.530 "d314a136-2fae-4447-acda-bcaed05f870e" 00:07:40.530 ], 00:07:40.530 "product_name": "Raid Volume", 00:07:40.530 "block_size": 512, 00:07:40.530 "num_blocks": 65536, 00:07:40.530 "uuid": "d314a136-2fae-4447-acda-bcaed05f870e", 00:07:40.530 "assigned_rate_limits": { 00:07:40.530 "rw_ios_per_sec": 0, 00:07:40.530 "rw_mbytes_per_sec": 0, 00:07:40.530 "r_mbytes_per_sec": 0, 00:07:40.530 
"w_mbytes_per_sec": 0 00:07:40.530 }, 00:07:40.530 "claimed": false, 00:07:40.530 "zoned": false, 00:07:40.530 "supported_io_types": { 00:07:40.530 "read": true, 00:07:40.530 "write": true, 00:07:40.530 "unmap": false, 00:07:40.530 "flush": false, 00:07:40.530 "reset": true, 00:07:40.530 "nvme_admin": false, 00:07:40.530 "nvme_io": false, 00:07:40.530 "nvme_io_md": false, 00:07:40.530 "write_zeroes": true, 00:07:40.530 "zcopy": false, 00:07:40.530 "get_zone_info": false, 00:07:40.530 "zone_management": false, 00:07:40.530 "zone_append": false, 00:07:40.530 "compare": false, 00:07:40.530 "compare_and_write": false, 00:07:40.530 "abort": false, 00:07:40.530 "seek_hole": false, 00:07:40.530 "seek_data": false, 00:07:40.530 "copy": false, 00:07:40.530 "nvme_iov_md": false 00:07:40.530 }, 00:07:40.530 "memory_domains": [ 00:07:40.530 { 00:07:40.530 "dma_device_id": "system", 00:07:40.530 "dma_device_type": 1 00:07:40.530 }, 00:07:40.530 { 00:07:40.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.530 "dma_device_type": 2 00:07:40.530 }, 00:07:40.530 { 00:07:40.530 "dma_device_id": "system", 00:07:40.530 "dma_device_type": 1 00:07:40.530 }, 00:07:40.530 { 00:07:40.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.530 "dma_device_type": 2 00:07:40.530 } 00:07:40.530 ], 00:07:40.530 "driver_specific": { 00:07:40.530 "raid": { 00:07:40.530 "uuid": "d314a136-2fae-4447-acda-bcaed05f870e", 00:07:40.530 "strip_size_kb": 0, 00:07:40.530 "state": "online", 00:07:40.530 "raid_level": "raid1", 00:07:40.530 "superblock": false, 00:07:40.530 "num_base_bdevs": 2, 00:07:40.530 "num_base_bdevs_discovered": 2, 00:07:40.530 "num_base_bdevs_operational": 2, 00:07:40.530 "base_bdevs_list": [ 00:07:40.530 { 00:07:40.530 "name": "BaseBdev1", 00:07:40.530 "uuid": "190f52ac-1923-4609-94d7-b4bc88f2df5c", 00:07:40.530 "is_configured": true, 00:07:40.530 "data_offset": 0, 00:07:40.530 "data_size": 65536 00:07:40.530 }, 00:07:40.530 { 00:07:40.530 "name": "BaseBdev2", 00:07:40.530 "uuid": 
"9e00fa87-4c7a-414b-90fd-98eb15cfd209", 00:07:40.530 "is_configured": true, 00:07:40.530 "data_offset": 0, 00:07:40.531 "data_size": 65536 00:07:40.531 } 00:07:40.531 ] 00:07:40.531 } 00:07:40.531 } 00:07:40.531 }' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:40.531 BaseBdev2' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:40.531 10:36:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.531 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.531 [2024-11-18 10:36:06.409546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.790 "name": "Existed_Raid", 00:07:40.790 "uuid": "d314a136-2fae-4447-acda-bcaed05f870e", 00:07:40.790 "strip_size_kb": 0, 00:07:40.790 "state": "online", 00:07:40.790 "raid_level": "raid1", 00:07:40.790 "superblock": false, 00:07:40.790 "num_base_bdevs": 2, 00:07:40.790 "num_base_bdevs_discovered": 1, 00:07:40.790 "num_base_bdevs_operational": 1, 00:07:40.790 "base_bdevs_list": [ 00:07:40.790 { 
00:07:40.790 "name": null, 00:07:40.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.790 "is_configured": false, 00:07:40.790 "data_offset": 0, 00:07:40.790 "data_size": 65536 00:07:40.790 }, 00:07:40.790 { 00:07:40.790 "name": "BaseBdev2", 00:07:40.790 "uuid": "9e00fa87-4c7a-414b-90fd-98eb15cfd209", 00:07:40.790 "is_configured": true, 00:07:40.790 "data_offset": 0, 00:07:40.790 "data_size": 65536 00:07:40.790 } 00:07:40.790 ] 00:07:40.790 }' 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.790 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.360 10:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:41.360 [2024-11-18 10:36:06.991662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:41.360 [2024-11-18 10:36:06.991771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.360 [2024-11-18 10:36:07.090658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.360 [2024-11-18 10:36:07.090715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.360 [2024-11-18 10:36:07.090728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62595 00:07:41.360 10:36:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62595 ']' 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62595 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:41.360 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.361 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62595 00:07:41.361 killing process with pid 62595 00:07:41.361 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.361 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.361 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62595' 00:07:41.361 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62595 00:07:41.361 [2024-11-18 10:36:07.172396] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.361 10:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62595 00:07:41.361 [2024-11-18 10:36:07.190068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:42.743 00:07:42.743 real 0m5.083s 00:07:42.743 user 0m7.237s 00:07:42.743 sys 0m0.880s 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.743 ************************************ 00:07:42.743 END TEST raid_state_function_test 00:07:42.743 ************************************ 00:07:42.743 10:36:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:42.743 10:36:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:42.743 10:36:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.743 10:36:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.743 ************************************ 00:07:42.743 START TEST raid_state_function_test_sb 00:07:42.743 ************************************ 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62844 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:42.743 Process raid pid: 62844 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62844' 00:07:42.743 10:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62844 00:07:42.744 10:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62844 ']' 00:07:42.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.744 10:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.744 10:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.744 10:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.744 10:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.744 10:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.744 [2024-11-18 10:36:08.513362] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:42.744 [2024-11-18 10:36:08.513589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.003 [2024-11-18 10:36:08.690905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.003 [2024-11-18 10:36:08.829421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.262 [2024-11-18 10:36:09.067934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.262 [2024-11-18 10:36:09.068090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.529 [2024-11-18 10:36:09.337927] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.529 [2024-11-18 10:36:09.338038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.529 [2024-11-18 10:36:09.338053] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.529 [2024-11-18 10:36:09.338063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.529 10:36:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.529 "name": "Existed_Raid", 00:07:43.529 "uuid": "f08505da-8960-40d0-8d73-524fc9edbbe8", 00:07:43.529 "strip_size_kb": 0, 00:07:43.529 "state": "configuring", 00:07:43.529 "raid_level": "raid1", 00:07:43.529 "superblock": true, 00:07:43.529 "num_base_bdevs": 2, 00:07:43.529 "num_base_bdevs_discovered": 0, 00:07:43.529 "num_base_bdevs_operational": 2, 00:07:43.529 "base_bdevs_list": [ 00:07:43.529 { 00:07:43.529 "name": "BaseBdev1", 00:07:43.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.529 "is_configured": false, 00:07:43.529 "data_offset": 0, 00:07:43.529 "data_size": 0 00:07:43.529 }, 00:07:43.529 { 00:07:43.529 "name": "BaseBdev2", 00:07:43.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.529 "is_configured": false, 00:07:43.529 "data_offset": 0, 00:07:43.529 "data_size": 0 00:07:43.529 } 00:07:43.529 ] 00:07:43.529 }' 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.529 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.112 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.113 [2024-11-18 10:36:09.769077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.113 [2024-11-18 10:36:09.769111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.113 [2024-11-18 10:36:09.777069] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.113 [2024-11-18 10:36:09.777110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.113 [2024-11-18 10:36:09.777119] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.113 [2024-11-18 10:36:09.777131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.113 [2024-11-18 10:36:09.825988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:44.113 BaseBdev1 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.113 [ 00:07:44.113 { 00:07:44.113 "name": "BaseBdev1", 00:07:44.113 "aliases": [ 00:07:44.113 "323f96b2-c516-46fb-b28c-9a6acb0f554a" 00:07:44.113 ], 00:07:44.113 "product_name": "Malloc disk", 00:07:44.113 "block_size": 512, 00:07:44.113 "num_blocks": 65536, 00:07:44.113 "uuid": "323f96b2-c516-46fb-b28c-9a6acb0f554a", 00:07:44.113 
"assigned_rate_limits": { 00:07:44.113 "rw_ios_per_sec": 0, 00:07:44.113 "rw_mbytes_per_sec": 0, 00:07:44.113 "r_mbytes_per_sec": 0, 00:07:44.113 "w_mbytes_per_sec": 0 00:07:44.113 }, 00:07:44.113 "claimed": true, 00:07:44.113 "claim_type": "exclusive_write", 00:07:44.113 "zoned": false, 00:07:44.113 "supported_io_types": { 00:07:44.113 "read": true, 00:07:44.113 "write": true, 00:07:44.113 "unmap": true, 00:07:44.113 "flush": true, 00:07:44.113 "reset": true, 00:07:44.113 "nvme_admin": false, 00:07:44.113 "nvme_io": false, 00:07:44.113 "nvme_io_md": false, 00:07:44.113 "write_zeroes": true, 00:07:44.113 "zcopy": true, 00:07:44.113 "get_zone_info": false, 00:07:44.113 "zone_management": false, 00:07:44.113 "zone_append": false, 00:07:44.113 "compare": false, 00:07:44.113 "compare_and_write": false, 00:07:44.113 "abort": true, 00:07:44.113 "seek_hole": false, 00:07:44.113 "seek_data": false, 00:07:44.113 "copy": true, 00:07:44.113 "nvme_iov_md": false 00:07:44.113 }, 00:07:44.113 "memory_domains": [ 00:07:44.113 { 00:07:44.113 "dma_device_id": "system", 00:07:44.113 "dma_device_type": 1 00:07:44.113 }, 00:07:44.113 { 00:07:44.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.113 "dma_device_type": 2 00:07:44.113 } 00:07:44.113 ], 00:07:44.113 "driver_specific": {} 00:07:44.113 } 00:07:44.113 ] 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.113 "name": "Existed_Raid", 00:07:44.113 "uuid": "a4a3e16a-83e9-48e1-8f31-e0279dddba21", 00:07:44.113 "strip_size_kb": 0, 00:07:44.113 "state": "configuring", 00:07:44.113 "raid_level": "raid1", 00:07:44.113 "superblock": true, 00:07:44.113 "num_base_bdevs": 2, 00:07:44.113 "num_base_bdevs_discovered": 1, 00:07:44.113 "num_base_bdevs_operational": 2, 00:07:44.113 "base_bdevs_list": [ 00:07:44.113 { 00:07:44.113 "name": "BaseBdev1", 00:07:44.113 "uuid": "323f96b2-c516-46fb-b28c-9a6acb0f554a", 00:07:44.113 "is_configured": true, 00:07:44.113 "data_offset": 2048, 
00:07:44.113 "data_size": 63488 00:07:44.113 }, 00:07:44.113 { 00:07:44.113 "name": "BaseBdev2", 00:07:44.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.113 "is_configured": false, 00:07:44.113 "data_offset": 0, 00:07:44.113 "data_size": 0 00:07:44.113 } 00:07:44.113 ] 00:07:44.113 }' 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.113 10:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.682 [2024-11-18 10:36:10.297218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.682 [2024-11-18 10:36:10.297258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.682 [2024-11-18 10:36:10.309270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.682 [2024-11-18 10:36:10.311330] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.682 [2024-11-18 10:36:10.311371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.682 "name": "Existed_Raid", 00:07:44.682 "uuid": "61bb4829-333f-49eb-b545-9874d6960a36", 00:07:44.682 "strip_size_kb": 0, 00:07:44.682 "state": "configuring", 00:07:44.682 "raid_level": "raid1", 00:07:44.682 "superblock": true, 00:07:44.682 "num_base_bdevs": 2, 00:07:44.682 "num_base_bdevs_discovered": 1, 00:07:44.682 "num_base_bdevs_operational": 2, 00:07:44.682 "base_bdevs_list": [ 00:07:44.682 { 00:07:44.682 "name": "BaseBdev1", 00:07:44.682 "uuid": "323f96b2-c516-46fb-b28c-9a6acb0f554a", 00:07:44.682 "is_configured": true, 00:07:44.682 "data_offset": 2048, 00:07:44.682 "data_size": 63488 00:07:44.682 }, 00:07:44.682 { 00:07:44.682 "name": "BaseBdev2", 00:07:44.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.682 "is_configured": false, 00:07:44.682 "data_offset": 0, 00:07:44.682 "data_size": 0 00:07:44.682 } 00:07:44.682 ] 00:07:44.682 }' 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.682 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.942 [2024-11-18 10:36:10.759208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.942 [2024-11-18 10:36:10.759539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.942 [2024-11-18 10:36:10.759592] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:44.942 [2024-11-18 10:36:10.759901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.942 BaseBdev2 00:07:44.942 [2024-11-18 10:36:10.760120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.942 [2024-11-18 10:36:10.760141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:44.942 [2024-11-18 10:36:10.760306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.942 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.942 [ 00:07:44.942 { 00:07:44.942 "name": "BaseBdev2", 00:07:44.942 "aliases": [ 00:07:44.942 "aca4dc49-7e66-4650-89b9-79abc4f992c5" 00:07:44.942 ], 00:07:44.942 "product_name": "Malloc disk", 00:07:44.942 "block_size": 512, 00:07:44.943 "num_blocks": 65536, 00:07:44.943 "uuid": "aca4dc49-7e66-4650-89b9-79abc4f992c5", 00:07:44.943 "assigned_rate_limits": { 00:07:44.943 "rw_ios_per_sec": 0, 00:07:44.943 "rw_mbytes_per_sec": 0, 00:07:44.943 "r_mbytes_per_sec": 0, 00:07:44.943 "w_mbytes_per_sec": 0 00:07:44.943 }, 00:07:44.943 "claimed": true, 00:07:44.943 "claim_type": "exclusive_write", 00:07:44.943 "zoned": false, 00:07:44.943 "supported_io_types": { 00:07:44.943 "read": true, 00:07:44.943 "write": true, 00:07:44.943 "unmap": true, 00:07:44.943 "flush": true, 00:07:44.943 "reset": true, 00:07:44.943 "nvme_admin": false, 00:07:44.943 "nvme_io": false, 00:07:44.943 "nvme_io_md": false, 00:07:44.943 "write_zeroes": true, 00:07:44.943 "zcopy": true, 00:07:44.943 "get_zone_info": false, 00:07:44.943 "zone_management": false, 00:07:44.943 "zone_append": false, 00:07:44.943 "compare": false, 00:07:44.943 "compare_and_write": false, 00:07:44.943 "abort": true, 00:07:44.943 "seek_hole": false, 00:07:44.943 "seek_data": false, 00:07:44.943 "copy": true, 00:07:44.943 "nvme_iov_md": false 00:07:44.943 }, 00:07:44.943 "memory_domains": [ 00:07:44.943 { 00:07:44.943 "dma_device_id": "system", 00:07:44.943 "dma_device_type": 1 00:07:44.943 }, 00:07:44.943 { 00:07:44.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.943 "dma_device_type": 2 00:07:44.943 } 00:07:44.943 ], 00:07:44.943 "driver_specific": {} 00:07:44.943 } 00:07:44.943 ] 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.943 10:36:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.203 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.203 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.203 "name": "Existed_Raid", 00:07:45.203 "uuid": "61bb4829-333f-49eb-b545-9874d6960a36", 00:07:45.203 "strip_size_kb": 0, 00:07:45.203 "state": "online", 00:07:45.203 "raid_level": "raid1", 00:07:45.203 "superblock": true, 00:07:45.203 "num_base_bdevs": 2, 00:07:45.203 "num_base_bdevs_discovered": 2, 00:07:45.203 "num_base_bdevs_operational": 2, 00:07:45.203 "base_bdevs_list": [ 00:07:45.203 { 00:07:45.203 "name": "BaseBdev1", 00:07:45.203 "uuid": "323f96b2-c516-46fb-b28c-9a6acb0f554a", 00:07:45.203 "is_configured": true, 00:07:45.203 "data_offset": 2048, 00:07:45.203 "data_size": 63488 00:07:45.203 }, 00:07:45.203 { 00:07:45.203 "name": "BaseBdev2", 00:07:45.203 "uuid": "aca4dc49-7e66-4650-89b9-79abc4f992c5", 00:07:45.203 "is_configured": true, 00:07:45.203 "data_offset": 2048, 00:07:45.203 "data_size": 63488 00:07:45.203 } 00:07:45.203 ] 00:07:45.203 }' 00:07:45.203 10:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.203 10:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.462 [2024-11-18 10:36:11.214654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.462 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.462 "name": "Existed_Raid", 00:07:45.462 "aliases": [ 00:07:45.462 "61bb4829-333f-49eb-b545-9874d6960a36" 00:07:45.462 ], 00:07:45.462 "product_name": "Raid Volume", 00:07:45.462 "block_size": 512, 00:07:45.462 "num_blocks": 63488, 00:07:45.462 "uuid": "61bb4829-333f-49eb-b545-9874d6960a36", 00:07:45.462 "assigned_rate_limits": { 00:07:45.462 "rw_ios_per_sec": 0, 00:07:45.462 "rw_mbytes_per_sec": 0, 00:07:45.462 "r_mbytes_per_sec": 0, 00:07:45.462 "w_mbytes_per_sec": 0 00:07:45.462 }, 00:07:45.462 "claimed": false, 00:07:45.463 "zoned": false, 00:07:45.463 "supported_io_types": { 00:07:45.463 "read": true, 00:07:45.463 "write": true, 00:07:45.463 "unmap": false, 00:07:45.463 "flush": false, 00:07:45.463 "reset": true, 00:07:45.463 "nvme_admin": false, 00:07:45.463 "nvme_io": false, 00:07:45.463 "nvme_io_md": false, 00:07:45.463 "write_zeroes": true, 00:07:45.463 "zcopy": false, 00:07:45.463 "get_zone_info": false, 00:07:45.463 "zone_management": false, 00:07:45.463 "zone_append": false, 00:07:45.463 "compare": false, 00:07:45.463 "compare_and_write": false, 00:07:45.463 "abort": false, 00:07:45.463 "seek_hole": false, 
00:07:45.463 "seek_data": false, 00:07:45.463 "copy": false, 00:07:45.463 "nvme_iov_md": false 00:07:45.463 }, 00:07:45.463 "memory_domains": [ 00:07:45.463 { 00:07:45.463 "dma_device_id": "system", 00:07:45.463 "dma_device_type": 1 00:07:45.463 }, 00:07:45.463 { 00:07:45.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.463 "dma_device_type": 2 00:07:45.463 }, 00:07:45.463 { 00:07:45.463 "dma_device_id": "system", 00:07:45.463 "dma_device_type": 1 00:07:45.463 }, 00:07:45.463 { 00:07:45.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.463 "dma_device_type": 2 00:07:45.463 } 00:07:45.463 ], 00:07:45.463 "driver_specific": { 00:07:45.463 "raid": { 00:07:45.463 "uuid": "61bb4829-333f-49eb-b545-9874d6960a36", 00:07:45.463 "strip_size_kb": 0, 00:07:45.463 "state": "online", 00:07:45.463 "raid_level": "raid1", 00:07:45.463 "superblock": true, 00:07:45.463 "num_base_bdevs": 2, 00:07:45.463 "num_base_bdevs_discovered": 2, 00:07:45.463 "num_base_bdevs_operational": 2, 00:07:45.463 "base_bdevs_list": [ 00:07:45.463 { 00:07:45.463 "name": "BaseBdev1", 00:07:45.463 "uuid": "323f96b2-c516-46fb-b28c-9a6acb0f554a", 00:07:45.463 "is_configured": true, 00:07:45.463 "data_offset": 2048, 00:07:45.463 "data_size": 63488 00:07:45.463 }, 00:07:45.463 { 00:07:45.463 "name": "BaseBdev2", 00:07:45.463 "uuid": "aca4dc49-7e66-4650-89b9-79abc4f992c5", 00:07:45.463 "is_configured": true, 00:07:45.463 "data_offset": 2048, 00:07:45.463 "data_size": 63488 00:07:45.463 } 00:07:45.463 ] 00:07:45.463 } 00:07:45.463 } 00:07:45.463 }' 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:45.463 BaseBdev2' 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.463 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.722 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.723 10:36:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.723 [2024-11-18 10:36:11.434090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.723 "name": "Existed_Raid", 00:07:45.723 "uuid": "61bb4829-333f-49eb-b545-9874d6960a36", 00:07:45.723 "strip_size_kb": 0, 00:07:45.723 "state": "online", 00:07:45.723 "raid_level": "raid1", 00:07:45.723 "superblock": true, 00:07:45.723 "num_base_bdevs": 2, 00:07:45.723 "num_base_bdevs_discovered": 1, 00:07:45.723 "num_base_bdevs_operational": 1, 00:07:45.723 "base_bdevs_list": [ 00:07:45.723 { 00:07:45.723 "name": null, 00:07:45.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.723 "is_configured": false, 00:07:45.723 "data_offset": 0, 00:07:45.723 "data_size": 63488 00:07:45.723 }, 00:07:45.723 { 00:07:45.723 "name": "BaseBdev2", 00:07:45.723 "uuid": "aca4dc49-7e66-4650-89b9-79abc4f992c5", 00:07:45.723 "is_configured": true, 00:07:45.723 "data_offset": 2048, 00:07:45.723 "data_size": 63488 00:07:45.723 } 00:07:45.723 ] 00:07:45.723 }' 00:07:45.723 10:36:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.723 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.292 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:46.293 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.293 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.293 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.293 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.293 10:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:46.293 10:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.293 [2024-11-18 10:36:12.030440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:46.293 [2024-11-18 10:36:12.030602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.293 [2024-11-18 10:36:12.132549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.293 [2024-11-18 10:36:12.132698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.293 [2024-11-18 10:36:12.132742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.293 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62844 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62844 ']' 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62844 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62844 00:07:46.553 killing process with pid 62844 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62844' 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62844 00:07:46.553 [2024-11-18 10:36:12.218561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.553 10:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62844 00:07:46.553 [2024-11-18 10:36:12.236370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.936 10:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:47.936 00:07:47.936 real 0m4.992s 00:07:47.936 user 0m7.050s 00:07:47.936 sys 0m0.863s 00:07:47.936 10:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.936 10:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.936 ************************************ 00:07:47.936 END TEST raid_state_function_test_sb 00:07:47.936 ************************************ 00:07:47.936 10:36:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:47.936 10:36:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:47.936 10:36:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.936 10:36:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.936 ************************************ 00:07:47.936 START TEST 
raid_superblock_test 00:07:47.936 ************************************ 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63096 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63096 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63096 ']' 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.936 10:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.937 10:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.937 10:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.937 [2024-11-18 10:36:13.569723] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:47.937 [2024-11-18 10:36:13.569859] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63096 ] 00:07:47.937 [2024-11-18 10:36:13.745434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.197 [2024-11-18 10:36:13.877343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.457 [2024-11-18 10:36:14.107542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.457 [2024-11-18 10:36:14.107585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:48.717 
10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.717 malloc1 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.717 [2024-11-18 10:36:14.442501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:48.717 [2024-11-18 10:36:14.442624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.717 [2024-11-18 10:36:14.442669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:48.717 [2024-11-18 10:36:14.442700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.717 [2024-11-18 10:36:14.445117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.717 [2024-11-18 10:36:14.445210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:48.717 pt1 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.717 malloc2 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.717 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.717 [2024-11-18 10:36:14.507975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:48.717 [2024-11-18 10:36:14.508066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.717 [2024-11-18 10:36:14.508106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:48.718 [2024-11-18 10:36:14.508139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.718 [2024-11-18 10:36:14.510506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.718 [2024-11-18 10:36:14.510574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:48.718 
pt2 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.718 [2024-11-18 10:36:14.520014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:48.718 [2024-11-18 10:36:14.522088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.718 [2024-11-18 10:36:14.522291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:48.718 [2024-11-18 10:36:14.522341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:48.718 [2024-11-18 10:36:14.522574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:48.718 [2024-11-18 10:36:14.522733] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:48.718 [2024-11-18 10:36:14.522748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:48.718 [2024-11-18 10:36:14.522890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
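The trace above shows `bdev_raid.sh` looping `i` from 1 to `num_base_bdevs`, appending a malloc name, a passthru name, and a UUID to three parallel arrays on each pass (lines 416-425 of the script). A minimal standalone sketch of that array-building pattern — names and the bdev count here are illustrative stand-ins, and the real script feeds each name into `rpc_cmd bdev_malloc_create` / `bdev_passthru_create`, which this sketch omits:

```shell
#!/usr/bin/env bash
# Sketch of the parallel-array setup loop traced above (bdev_raid.sh@416-425).
# Assumption: 2 base bdevs, as in this raid1 test run.
num_base_bdevs=2
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # The real test issues rpc_cmd bdev_malloc_create / bdev_passthru_create
    # here; this sketch only accumulates the names.
done
echo "${base_bdevs_pt[*]}"
```

Running the sketch prints the passthru names (`pt1 pt2`) that the subsequent `bdev_raid_create -b 'pt1 pt2'` call in the trace consumes.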
-- # local raid_bdev_name=raid_bdev1 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.718 "name": "raid_bdev1", 00:07:48.718 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:48.718 "strip_size_kb": 0, 00:07:48.718 "state": "online", 00:07:48.718 "raid_level": "raid1", 00:07:48.718 "superblock": true, 00:07:48.718 "num_base_bdevs": 2, 00:07:48.718 "num_base_bdevs_discovered": 2, 00:07:48.718 "num_base_bdevs_operational": 2, 00:07:48.718 "base_bdevs_list": [ 00:07:48.718 { 00:07:48.718 "name": "pt1", 00:07:48.718 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:48.718 "is_configured": true, 00:07:48.718 "data_offset": 2048, 00:07:48.718 "data_size": 63488 00:07:48.718 }, 00:07:48.718 { 00:07:48.718 "name": "pt2", 00:07:48.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.718 "is_configured": true, 00:07:48.718 "data_offset": 2048, 00:07:48.718 "data_size": 63488 00:07:48.718 } 00:07:48.718 ] 00:07:48.718 }' 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.718 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.287 [2024-11-18 10:36:14.951557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.287 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:49.287 "name": "raid_bdev1", 00:07:49.287 "aliases": [ 00:07:49.287 "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe" 00:07:49.287 ], 00:07:49.287 "product_name": "Raid Volume", 00:07:49.287 "block_size": 512, 00:07:49.287 "num_blocks": 63488, 00:07:49.287 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:49.287 "assigned_rate_limits": { 00:07:49.287 "rw_ios_per_sec": 0, 00:07:49.287 "rw_mbytes_per_sec": 0, 00:07:49.287 "r_mbytes_per_sec": 0, 00:07:49.287 "w_mbytes_per_sec": 0 00:07:49.287 }, 00:07:49.287 "claimed": false, 00:07:49.287 "zoned": false, 00:07:49.287 "supported_io_types": { 00:07:49.287 "read": true, 00:07:49.287 "write": true, 00:07:49.287 "unmap": false, 00:07:49.287 "flush": false, 00:07:49.287 "reset": true, 00:07:49.287 "nvme_admin": false, 00:07:49.288 "nvme_io": false, 00:07:49.288 "nvme_io_md": false, 00:07:49.288 "write_zeroes": true, 00:07:49.288 "zcopy": false, 00:07:49.288 "get_zone_info": false, 00:07:49.288 "zone_management": false, 00:07:49.288 "zone_append": false, 00:07:49.288 "compare": false, 00:07:49.288 "compare_and_write": false, 00:07:49.288 "abort": false, 00:07:49.288 "seek_hole": false, 00:07:49.288 "seek_data": false, 00:07:49.288 "copy": false, 00:07:49.288 "nvme_iov_md": false 00:07:49.288 }, 00:07:49.288 "memory_domains": [ 00:07:49.288 { 00:07:49.288 "dma_device_id": "system", 00:07:49.288 "dma_device_type": 1 00:07:49.288 }, 00:07:49.288 { 00:07:49.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.288 "dma_device_type": 2 00:07:49.288 }, 00:07:49.288 { 00:07:49.288 "dma_device_id": "system", 00:07:49.288 "dma_device_type": 1 00:07:49.288 }, 00:07:49.288 { 00:07:49.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.288 "dma_device_type": 2 00:07:49.288 } 00:07:49.288 ], 00:07:49.288 "driver_specific": { 00:07:49.288 "raid": { 00:07:49.288 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:49.288 "strip_size_kb": 0, 00:07:49.288 "state": "online", 00:07:49.288 "raid_level": "raid1", 
00:07:49.288 "superblock": true, 00:07:49.288 "num_base_bdevs": 2, 00:07:49.288 "num_base_bdevs_discovered": 2, 00:07:49.288 "num_base_bdevs_operational": 2, 00:07:49.288 "base_bdevs_list": [ 00:07:49.288 { 00:07:49.288 "name": "pt1", 00:07:49.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.288 "is_configured": true, 00:07:49.288 "data_offset": 2048, 00:07:49.288 "data_size": 63488 00:07:49.288 }, 00:07:49.288 { 00:07:49.288 "name": "pt2", 00:07:49.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.288 "is_configured": true, 00:07:49.288 "data_offset": 2048, 00:07:49.288 "data_size": 63488 00:07:49.288 } 00:07:49.288 ] 00:07:49.288 } 00:07:49.288 } 00:07:49.288 }' 00:07:49.288 10:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:49.288 pt2' 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.288 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.548 [2024-11-18 10:36:15.187117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24cffeb6-483e-417f-ade7-ec3f7ffdc6fe 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 24cffeb6-483e-417f-ade7-ec3f7ffdc6fe ']' 00:07:49.548 10:36:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.548 [2024-11-18 10:36:15.230753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.548 [2024-11-18 10:36:15.230816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.548 [2024-11-18 10:36:15.230897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.548 [2024-11-18 10:36:15.230969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.548 [2024-11-18 10:36:15.230987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.548 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:49.549 10:36:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.549 [2024-11-18 10:36:15.362558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:49.549 [2024-11-18 10:36:15.364699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:49.549 [2024-11-18 10:36:15.364808] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:49.549 [2024-11-18 10:36:15.364858] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:49.549 [2024-11-18 10:36:15.364871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.549 [2024-11-18 10:36:15.364880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:49.549 request: 00:07:49.549 { 00:07:49.549 "name": "raid_bdev1", 00:07:49.549 "raid_level": "raid1", 00:07:49.549 "base_bdevs": [ 00:07:49.549 "malloc1", 00:07:49.549 "malloc2" 00:07:49.549 ], 00:07:49.549 "superblock": false, 00:07:49.549 "method": "bdev_raid_create", 00:07:49.549 "req_id": 1 00:07:49.549 } 00:07:49.549 Got 
JSON-RPC error response 00:07:49.549 response: 00:07:49.549 { 00:07:49.549 "code": -17, 00:07:49.549 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:49.549 } 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.549 [2024-11-18 10:36:15.410449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:49.549 [2024-11-18 10:36:15.410534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
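The `-17 File exists` failure above is intentional: the trace wraps the second `bdev_raid_create` in the `NOT` helper from `autotest_common.sh`, which inverts the command's exit status so an expected failure counts as a pass (the `es=1` bookkeeping that follows). A hedged sketch of that negative-test pattern — `fails` here is a hypothetical stand-in for the failing `rpc_cmd`, and this `NOT` is a simplification of the real helper, which also captures `es`:

```shell
#!/usr/bin/env bash
# Simplified NOT-style negative test, as used in the trace above.
NOT() { ! "$@"; }          # succeed only if the wrapped command fails
fails() { return 1; }      # stand-in for the rpc_cmd that returns -17 (assumption)
if NOT fails; then
    echo "negative test passed"
fi
```

In the real run, the raid superblock written to `malloc1`/`malloc2` during the first create is what makes the second create fail, which is exactly the behavior the superblock test wants to verify.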
bdev opened 00:07:49.549 [2024-11-18 10:36:15.410565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:49.549 [2024-11-18 10:36:15.410596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.549 [2024-11-18 10:36:15.413000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.549 [2024-11-18 10:36:15.413073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.549 [2024-11-18 10:36:15.413164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:49.549 [2024-11-18 10:36:15.413256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.549 pt1 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.549 
10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.549 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.809 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.809 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.809 "name": "raid_bdev1", 00:07:49.809 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:49.809 "strip_size_kb": 0, 00:07:49.809 "state": "configuring", 00:07:49.809 "raid_level": "raid1", 00:07:49.809 "superblock": true, 00:07:49.809 "num_base_bdevs": 2, 00:07:49.809 "num_base_bdevs_discovered": 1, 00:07:49.809 "num_base_bdevs_operational": 2, 00:07:49.809 "base_bdevs_list": [ 00:07:49.809 { 00:07:49.809 "name": "pt1", 00:07:49.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.809 "is_configured": true, 00:07:49.809 "data_offset": 2048, 00:07:49.809 "data_size": 63488 00:07:49.809 }, 00:07:49.809 { 00:07:49.809 "name": null, 00:07:49.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.809 "is_configured": false, 00:07:49.809 "data_offset": 2048, 00:07:49.809 "data_size": 63488 00:07:49.809 } 00:07:49.809 ] 00:07:49.809 }' 00:07:49.809 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.809 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.068 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:50.068 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:50.068 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:50.068 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.068 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.068 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.069 [2024-11-18 10:36:15.853804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.069 [2024-11-18 10:36:15.853937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.069 [2024-11-18 10:36:15.853964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:50.069 [2024-11-18 10:36:15.853976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.069 [2024-11-18 10:36:15.854509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.069 [2024-11-18 10:36:15.854531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.069 [2024-11-18 10:36:15.854621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:50.069 [2024-11-18 10:36:15.854647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.069 [2024-11-18 10:36:15.854781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.069 [2024-11-18 10:36:15.854793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.069 [2024-11-18 10:36:15.855067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:50.069 [2024-11-18 10:36:15.855261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.069 [2024-11-18 10:36:15.855272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:07:50.069 [2024-11-18 10:36:15.855415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.069 pt2 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.069 "name": "raid_bdev1", 00:07:50.069 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:50.069 "strip_size_kb": 0, 00:07:50.069 "state": "online", 00:07:50.069 "raid_level": "raid1", 00:07:50.069 "superblock": true, 00:07:50.069 "num_base_bdevs": 2, 00:07:50.069 "num_base_bdevs_discovered": 2, 00:07:50.069 "num_base_bdevs_operational": 2, 00:07:50.069 "base_bdevs_list": [ 00:07:50.069 { 00:07:50.069 "name": "pt1", 00:07:50.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.069 "is_configured": true, 00:07:50.069 "data_offset": 2048, 00:07:50.069 "data_size": 63488 00:07:50.069 }, 00:07:50.069 { 00:07:50.069 "name": "pt2", 00:07:50.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.069 "is_configured": true, 00:07:50.069 "data_offset": 2048, 00:07:50.069 "data_size": 63488 00:07:50.069 } 00:07:50.069 ] 00:07:50.069 }' 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.069 10:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.639 [2024-11-18 10:36:16.269324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.639 "name": "raid_bdev1", 00:07:50.639 "aliases": [ 00:07:50.639 "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe" 00:07:50.639 ], 00:07:50.639 "product_name": "Raid Volume", 00:07:50.639 "block_size": 512, 00:07:50.639 "num_blocks": 63488, 00:07:50.639 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:50.639 "assigned_rate_limits": { 00:07:50.639 "rw_ios_per_sec": 0, 00:07:50.639 "rw_mbytes_per_sec": 0, 00:07:50.639 "r_mbytes_per_sec": 0, 00:07:50.639 "w_mbytes_per_sec": 0 00:07:50.639 }, 00:07:50.639 "claimed": false, 00:07:50.639 "zoned": false, 00:07:50.639 "supported_io_types": { 00:07:50.639 "read": true, 00:07:50.639 "write": true, 00:07:50.639 "unmap": false, 00:07:50.639 "flush": false, 00:07:50.639 "reset": true, 00:07:50.639 "nvme_admin": false, 00:07:50.639 "nvme_io": false, 00:07:50.639 "nvme_io_md": false, 00:07:50.639 "write_zeroes": true, 00:07:50.639 "zcopy": false, 00:07:50.639 "get_zone_info": false, 00:07:50.639 "zone_management": false, 00:07:50.639 "zone_append": false, 00:07:50.639 "compare": false, 00:07:50.639 "compare_and_write": false, 00:07:50.639 "abort": false, 00:07:50.639 "seek_hole": false, 00:07:50.639 "seek_data": false, 00:07:50.639 "copy": false, 00:07:50.639 "nvme_iov_md": false 00:07:50.639 }, 00:07:50.639 "memory_domains": [ 00:07:50.639 { 00:07:50.639 "dma_device_id": 
"system", 00:07:50.639 "dma_device_type": 1 00:07:50.639 }, 00:07:50.639 { 00:07:50.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.639 "dma_device_type": 2 00:07:50.639 }, 00:07:50.639 { 00:07:50.639 "dma_device_id": "system", 00:07:50.639 "dma_device_type": 1 00:07:50.639 }, 00:07:50.639 { 00:07:50.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.639 "dma_device_type": 2 00:07:50.639 } 00:07:50.639 ], 00:07:50.639 "driver_specific": { 00:07:50.639 "raid": { 00:07:50.639 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:50.639 "strip_size_kb": 0, 00:07:50.639 "state": "online", 00:07:50.639 "raid_level": "raid1", 00:07:50.639 "superblock": true, 00:07:50.639 "num_base_bdevs": 2, 00:07:50.639 "num_base_bdevs_discovered": 2, 00:07:50.639 "num_base_bdevs_operational": 2, 00:07:50.639 "base_bdevs_list": [ 00:07:50.639 { 00:07:50.639 "name": "pt1", 00:07:50.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.639 "is_configured": true, 00:07:50.639 "data_offset": 2048, 00:07:50.639 "data_size": 63488 00:07:50.639 }, 00:07:50.639 { 00:07:50.639 "name": "pt2", 00:07:50.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.639 "is_configured": true, 00:07:50.639 "data_offset": 2048, 00:07:50.639 "data_size": 63488 00:07:50.639 } 00:07:50.639 ] 00:07:50.639 } 00:07:50.639 } 00:07:50.639 }' 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:50.639 pt2' 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.639 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:50.640 [2024-11-18 10:36:16.468975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24cffeb6-483e-417f-ade7-ec3f7ffdc6fe '!=' 24cffeb6-483e-417f-ade7-ec3f7ffdc6fe ']' 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.640 [2024-11-18 10:36:16.492724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.640 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.899 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.899 "name": "raid_bdev1", 00:07:50.899 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:50.899 "strip_size_kb": 0, 00:07:50.899 "state": "online", 00:07:50.899 "raid_level": "raid1", 00:07:50.899 "superblock": true, 00:07:50.899 "num_base_bdevs": 2, 00:07:50.899 "num_base_bdevs_discovered": 1, 00:07:50.899 "num_base_bdevs_operational": 1, 00:07:50.899 "base_bdevs_list": [ 00:07:50.899 { 00:07:50.899 "name": null, 00:07:50.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.899 "is_configured": false, 00:07:50.899 "data_offset": 0, 00:07:50.899 "data_size": 63488 00:07:50.899 }, 00:07:50.899 { 00:07:50.899 "name": "pt2", 00:07:50.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.899 "is_configured": true, 00:07:50.899 "data_offset": 2048, 00:07:50.899 "data_size": 63488 00:07:50.899 } 00:07:50.899 ] 00:07:50.899 }' 00:07:50.899 10:36:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.899 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.159 [2024-11-18 10:36:16.935987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.159 [2024-11-18 10:36:16.936066] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.159 [2024-11-18 10:36:16.936203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.159 [2024-11-18 10:36:16.936281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.159 [2024-11-18 10:36:16.936319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:51.159 
10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:51.159 10:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.159 [2024-11-18 10:36:17.007819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:51.159 [2024-11-18 10:36:17.007883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.159 [2024-11-18 10:36:17.007903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:51.159 [2024-11-18 10:36:17.007916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.159 [2024-11-18 
10:36:17.010389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.159 [2024-11-18 10:36:17.010427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:51.159 [2024-11-18 10:36:17.010526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:51.159 [2024-11-18 10:36:17.010577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:51.159 [2024-11-18 10:36:17.010676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:51.159 [2024-11-18 10:36:17.010688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:51.159 [2024-11-18 10:36:17.010908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:51.159 [2024-11-18 10:36:17.011124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:51.159 [2024-11-18 10:36:17.011135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:51.159 [2024-11-18 10:36:17.011302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.159 pt2 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.159 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.441 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.441 "name": "raid_bdev1", 00:07:51.441 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:51.441 "strip_size_kb": 0, 00:07:51.441 "state": "online", 00:07:51.441 "raid_level": "raid1", 00:07:51.441 "superblock": true, 00:07:51.441 "num_base_bdevs": 2, 00:07:51.441 "num_base_bdevs_discovered": 1, 00:07:51.441 "num_base_bdevs_operational": 1, 00:07:51.441 "base_bdevs_list": [ 00:07:51.441 { 00:07:51.441 "name": null, 00:07:51.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.441 "is_configured": false, 00:07:51.441 "data_offset": 2048, 00:07:51.441 "data_size": 63488 00:07:51.441 }, 00:07:51.441 { 00:07:51.441 "name": "pt2", 00:07:51.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.441 "is_configured": true, 00:07:51.441 "data_offset": 2048, 00:07:51.441 "data_size": 63488 00:07:51.441 } 00:07:51.441 ] 00:07:51.441 }' 
00:07:51.441 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.441 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.701 [2024-11-18 10:36:17.403199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.701 [2024-11-18 10:36:17.403288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.701 [2024-11-18 10:36:17.403404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.701 [2024-11-18 10:36:17.403481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.701 [2024-11-18 10:36:17.403551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.701 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.701 [2024-11-18 10:36:17.443124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:51.701 [2024-11-18 10:36:17.443250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.701 [2024-11-18 10:36:17.443293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:51.701 [2024-11-18 10:36:17.443325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.701 [2024-11-18 10:36:17.445861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.701 [2024-11-18 10:36:17.445935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:51.701 [2024-11-18 10:36:17.446053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:51.701 [2024-11-18 10:36:17.446125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:51.701 [2024-11-18 10:36:17.446335] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:51.701 [2024-11-18 10:36:17.446393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.701 [2024-11-18 10:36:17.446434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:51.701 [2024-11-18 10:36:17.446536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:51.702 [2024-11-18 10:36:17.446650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:51.702 [2024-11-18 10:36:17.446686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:51.702 [2024-11-18 10:36:17.446978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:51.702 [2024-11-18 10:36:17.447180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:51.702 pt1 00:07:51.702 [2024-11-18 10:36:17.447231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:51.702 [2024-11-18 10:36:17.447427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.702 "name": "raid_bdev1", 00:07:51.702 "uuid": "24cffeb6-483e-417f-ade7-ec3f7ffdc6fe", 00:07:51.702 "strip_size_kb": 0, 00:07:51.702 "state": "online", 00:07:51.702 "raid_level": "raid1", 00:07:51.702 "superblock": true, 00:07:51.702 "num_base_bdevs": 2, 00:07:51.702 "num_base_bdevs_discovered": 1, 00:07:51.702 "num_base_bdevs_operational": 1, 00:07:51.702 "base_bdevs_list": [ 00:07:51.702 { 00:07:51.702 "name": null, 00:07:51.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.702 "is_configured": false, 00:07:51.702 "data_offset": 2048, 00:07:51.702 "data_size": 63488 00:07:51.702 }, 00:07:51.702 { 00:07:51.702 "name": "pt2", 00:07:51.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.702 "is_configured": true, 00:07:51.702 "data_offset": 2048, 00:07:51.702 "data_size": 63488 00:07:51.702 } 00:07:51.702 ] 00:07:51.702 }' 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.702 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.271 [2024-11-18 10:36:17.918823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 24cffeb6-483e-417f-ade7-ec3f7ffdc6fe '!=' 24cffeb6-483e-417f-ade7-ec3f7ffdc6fe ']' 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63096 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63096 ']' 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63096 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63096 00:07:52.271 killing process with pid 
63096 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63096' 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63096 00:07:52.271 [2024-11-18 10:36:17.978399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.271 [2024-11-18 10:36:17.978491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.271 [2024-11-18 10:36:17.978539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.271 [2024-11-18 10:36:17.978555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:52.271 10:36:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63096 00:07:52.531 [2024-11-18 10:36:18.191894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.471 10:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:53.471 00:07:53.471 real 0m5.862s 00:07:53.471 user 0m8.701s 00:07:53.471 sys 0m1.038s 00:07:53.471 10:36:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.471 ************************************ 00:07:53.471 END TEST raid_superblock_test 00:07:53.471 ************************************ 00:07:53.471 10:36:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.731 10:36:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:53.731 10:36:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.731 10:36:19 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.731 10:36:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.731 ************************************ 00:07:53.731 START TEST raid_read_error_test 00:07:53.731 ************************************ 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:53.731 10:36:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xIarCJmzNZ 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63420 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63420 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63420 ']' 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.731 10:36:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.731 [2024-11-18 10:36:19.524564] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:53.731 [2024-11-18 10:36:19.524772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63420 ] 00:07:53.996 [2024-11-18 10:36:19.698288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.996 [2024-11-18 10:36:19.837120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.260 [2024-11-18 10:36:20.067357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.260 [2024-11-18 10:36:20.067519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.520 BaseBdev1_malloc 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.520 true 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.520 [2024-11-18 10:36:20.395956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:54.520 [2024-11-18 10:36:20.396058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.520 [2024-11-18 10:36:20.396107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:54.520 [2024-11-18 10:36:20.396138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.520 [2024-11-18 10:36:20.398483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.520 [2024-11-18 10:36:20.398559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:54.520 BaseBdev1 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.520 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:54.779 BaseBdev2_malloc 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.779 true 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.779 [2024-11-18 10:36:20.465285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.779 [2024-11-18 10:36:20.465391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.779 [2024-11-18 10:36:20.465414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:54.779 [2024-11-18 10:36:20.465425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.779 [2024-11-18 10:36:20.467736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.779 [2024-11-18 10:36:20.467776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:54.779 BaseBdev2 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:54.779 10:36:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.779 [2024-11-18 10:36:20.473344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.779 [2024-11-18 10:36:20.475409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.779 [2024-11-18 10:36:20.475594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:54.779 [2024-11-18 10:36:20.475609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.779 [2024-11-18 10:36:20.475823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:54.779 [2024-11-18 10:36:20.476004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:54.779 [2024-11-18 10:36:20.476014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:54.779 [2024-11-18 10:36:20.476148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.779 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.779 "name": "raid_bdev1", 00:07:54.779 "uuid": "8b4c313d-e2c0-4283-8c63-d5901774e789", 00:07:54.779 "strip_size_kb": 0, 00:07:54.779 "state": "online", 00:07:54.779 "raid_level": "raid1", 00:07:54.779 "superblock": true, 00:07:54.779 "num_base_bdevs": 2, 00:07:54.779 "num_base_bdevs_discovered": 2, 00:07:54.779 "num_base_bdevs_operational": 2, 00:07:54.779 "base_bdevs_list": [ 00:07:54.779 { 00:07:54.779 "name": "BaseBdev1", 00:07:54.780 "uuid": "3ed9cbb5-a5da-5db3-a39d-17fe4495fe28", 00:07:54.780 "is_configured": true, 00:07:54.780 "data_offset": 2048, 00:07:54.780 "data_size": 63488 00:07:54.780 }, 00:07:54.780 { 00:07:54.780 "name": "BaseBdev2", 00:07:54.780 "uuid": "c4f91bd0-a124-5471-8a8d-3f77201d9441", 00:07:54.780 "is_configured": true, 00:07:54.780 "data_offset": 2048, 00:07:54.780 "data_size": 63488 00:07:54.780 } 00:07:54.780 ] 00:07:54.780 }' 00:07:54.780 10:36:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.780 10:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.039 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:55.039 10:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:55.298 [2024-11-18 10:36:20.997849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.238 10:36:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.238 "name": "raid_bdev1", 00:07:56.238 "uuid": "8b4c313d-e2c0-4283-8c63-d5901774e789", 00:07:56.238 "strip_size_kb": 0, 00:07:56.238 "state": "online", 00:07:56.238 "raid_level": "raid1", 00:07:56.238 "superblock": true, 00:07:56.238 "num_base_bdevs": 2, 00:07:56.238 "num_base_bdevs_discovered": 2, 00:07:56.238 "num_base_bdevs_operational": 2, 00:07:56.238 "base_bdevs_list": [ 00:07:56.238 { 00:07:56.238 "name": "BaseBdev1", 00:07:56.238 "uuid": "3ed9cbb5-a5da-5db3-a39d-17fe4495fe28", 00:07:56.238 "is_configured": true, 00:07:56.238 "data_offset": 2048, 00:07:56.238 "data_size": 63488 00:07:56.238 }, 00:07:56.238 { 00:07:56.238 "name": "BaseBdev2", 00:07:56.238 "uuid": "c4f91bd0-a124-5471-8a8d-3f77201d9441", 00:07:56.238 "is_configured": true, 00:07:56.238 "data_offset": 2048, 00:07:56.238 "data_size": 63488 
00:07:56.238 } 00:07:56.238 ] 00:07:56.238 }' 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.238 10:36:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.497 [2024-11-18 10:36:22.306196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.497 [2024-11-18 10:36:22.306241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.497 [2024-11-18 10:36:22.308867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.497 [2024-11-18 10:36:22.308947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.497 [2024-11-18 10:36:22.309058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.497 [2024-11-18 10:36:22.309104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.497 { 00:07:56.497 "results": [ 00:07:56.497 { 00:07:56.497 "job": "raid_bdev1", 00:07:56.497 "core_mask": "0x1", 00:07:56.497 "workload": "randrw", 00:07:56.497 "percentage": 50, 00:07:56.497 "status": "finished", 00:07:56.497 "queue_depth": 1, 00:07:56.497 "io_size": 131072, 00:07:56.497 "runtime": 1.308953, 00:07:56.497 "iops": 14992.898904697113, 00:07:56.497 "mibps": 1874.112363087139, 00:07:56.497 "io_failed": 0, 00:07:56.497 "io_timeout": 0, 00:07:56.497 "avg_latency_us": 64.23402215114177, 00:07:56.497 "min_latency_us": 
22.246288209606988, 00:07:56.497 "max_latency_us": 1402.2986899563318 00:07:56.497 } 00:07:56.497 ], 00:07:56.497 "core_count": 1 00:07:56.497 } 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63420 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63420 ']' 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63420 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63420 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63420' 00:07:56.497 killing process with pid 63420 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63420 00:07:56.497 [2024-11-18 10:36:22.358793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.497 10:36:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63420 00:07:56.757 [2024-11-18 10:36:22.502363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xIarCJmzNZ 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:58.137 00:07:58.137 real 0m4.323s 00:07:58.137 user 0m4.986s 00:07:58.137 sys 0m0.647s 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.137 10:36:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 ************************************ 00:07:58.137 END TEST raid_read_error_test 00:07:58.137 ************************************ 00:07:58.137 10:36:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:58.137 10:36:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.137 10:36:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.137 10:36:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 ************************************ 00:07:58.137 START TEST raid_write_error_test 00:07:58.137 ************************************ 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Lfy4VvVOBe 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63566 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63566 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63566 ']' 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.137 10:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 [2024-11-18 10:36:23.913711] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:58.137 [2024-11-18 10:36:23.913830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63566 ] 00:07:58.397 [2024-11-18 10:36:24.091157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.397 [2024-11-18 10:36:24.226019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.657 [2024-11-18 10:36:24.450115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.657 [2024-11-18 10:36:24.450190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.917 BaseBdev1_malloc 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.917 true 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.917 [2024-11-18 10:36:24.791118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.917 [2024-11-18 10:36:24.791192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.917 [2024-11-18 10:36:24.791214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.917 [2024-11-18 10:36:24.791238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.917 [2024-11-18 10:36:24.793627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.917 [2024-11-18 10:36:24.793665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.917 BaseBdev1 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.917 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.177 BaseBdev2_malloc 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:59.177 10:36:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.177 true 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.177 [2024-11-18 10:36:24.860687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:59.177 [2024-11-18 10:36:24.860742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.177 [2024-11-18 10:36:24.860757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:59.177 [2024-11-18 10:36:24.860768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.177 [2024-11-18 10:36:24.863112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.177 [2024-11-18 10:36:24.863150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:59.177 BaseBdev2 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.177 [2024-11-18 10:36:24.872735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:59.177 [2024-11-18 10:36:24.874815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.177 [2024-11-18 10:36:24.875124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:59.177 [2024-11-18 10:36:24.875146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.177 [2024-11-18 10:36:24.875399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:59.177 [2024-11-18 10:36:24.875603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:59.177 [2024-11-18 10:36:24.875614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:59.177 [2024-11-18 10:36:24.875755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.177 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.178 "name": "raid_bdev1", 00:07:59.178 "uuid": "73e69dd6-44f4-4c7c-a5fb-7fb4cc8d97ba", 00:07:59.178 "strip_size_kb": 0, 00:07:59.178 "state": "online", 00:07:59.178 "raid_level": "raid1", 00:07:59.178 "superblock": true, 00:07:59.178 "num_base_bdevs": 2, 00:07:59.178 "num_base_bdevs_discovered": 2, 00:07:59.178 "num_base_bdevs_operational": 2, 00:07:59.178 "base_bdevs_list": [ 00:07:59.178 { 00:07:59.178 "name": "BaseBdev1", 00:07:59.178 "uuid": "2dc7d180-d599-5103-b31e-1cb732eb2581", 00:07:59.178 "is_configured": true, 00:07:59.178 "data_offset": 2048, 00:07:59.178 "data_size": 63488 00:07:59.178 }, 00:07:59.178 { 00:07:59.178 "name": "BaseBdev2", 00:07:59.178 "uuid": "8d471394-a450-5d8e-981e-11b8bf5afb6f", 00:07:59.178 "is_configured": true, 00:07:59.178 "data_offset": 2048, 00:07:59.178 "data_size": 63488 00:07:59.178 } 00:07:59.178 ] 00:07:59.178 }' 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.178 10:36:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.747 10:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:59.747 10:36:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:59.747 [2024-11-18 10:36:25.441282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:00.686 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:00.686 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.686 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.686 [2024-11-18 10:36:26.359513] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:00.686 [2024-11-18 10:36:26.359675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:00.686 [2024-11-18 10:36:26.359914] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:00.686 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.686 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:00.686 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:00.686 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:00.686 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.687 "name": "raid_bdev1", 00:08:00.687 "uuid": "73e69dd6-44f4-4c7c-a5fb-7fb4cc8d97ba", 00:08:00.687 "strip_size_kb": 0, 00:08:00.687 "state": "online", 00:08:00.687 "raid_level": "raid1", 00:08:00.687 "superblock": true, 00:08:00.687 "num_base_bdevs": 2, 00:08:00.687 "num_base_bdevs_discovered": 1, 00:08:00.687 "num_base_bdevs_operational": 1, 00:08:00.687 "base_bdevs_list": [ 00:08:00.687 { 00:08:00.687 "name": null, 00:08:00.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.687 "is_configured": false, 00:08:00.687 "data_offset": 0, 00:08:00.687 "data_size": 63488 00:08:00.687 }, 00:08:00.687 { 00:08:00.687 "name": 
"BaseBdev2", 00:08:00.687 "uuid": "8d471394-a450-5d8e-981e-11b8bf5afb6f", 00:08:00.687 "is_configured": true, 00:08:00.687 "data_offset": 2048, 00:08:00.687 "data_size": 63488 00:08:00.687 } 00:08:00.687 ] 00:08:00.687 }' 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.687 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.946 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.946 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.946 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.946 [2024-11-18 10:36:26.792660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.946 [2024-11-18 10:36:26.792704] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.946 [2024-11-18 10:36:26.795317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.946 [2024-11-18 10:36:26.795451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.946 [2024-11-18 10:36:26.795524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.946 [2024-11-18 10:36:26.795538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.946 { 00:08:00.946 "results": [ 00:08:00.946 { 00:08:00.946 "job": "raid_bdev1", 00:08:00.946 "core_mask": "0x1", 00:08:00.946 "workload": "randrw", 00:08:00.946 "percentage": 50, 00:08:00.946 "status": "finished", 00:08:00.946 "queue_depth": 1, 00:08:00.946 "io_size": 131072, 00:08:00.946 "runtime": 1.35188, 00:08:00.946 "iops": 18618.516436370093, 00:08:00.946 "mibps": 2327.3145545462617, 00:08:00.946 "io_failed": 0, 00:08:00.946 "io_timeout": 0, 
00:08:00.946 "avg_latency_us": 51.28226165133858, 00:08:00.946 "min_latency_us": 20.79301310043668, 00:08:00.946 "max_latency_us": 1345.0620087336245 00:08:00.946 } 00:08:00.946 ], 00:08:00.946 "core_count": 1 00:08:00.946 } 00:08:00.947 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.947 10:36:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63566 00:08:00.947 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63566 ']' 00:08:00.947 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63566 00:08:00.947 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:00.947 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.947 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63566 00:08:01.206 killing process with pid 63566 00:08:01.206 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.206 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.206 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63566' 00:08:01.206 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63566 00:08:01.206 [2024-11-18 10:36:26.842730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:01.206 10:36:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63566 00:08:01.206 [2024-11-18 10:36:26.982201] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Lfy4VvVOBe 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:02.588 00:08:02.588 real 0m4.401s 00:08:02.588 user 0m5.164s 00:08:02.588 sys 0m0.646s 00:08:02.588 ************************************ 00:08:02.588 END TEST raid_write_error_test 00:08:02.588 ************************************ 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.588 10:36:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.588 10:36:28 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:02.588 10:36:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:02.588 10:36:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:02.588 10:36:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.588 10:36:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.588 10:36:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.588 ************************************ 00:08:02.588 START TEST raid_state_function_test 00:08:02.588 ************************************ 00:08:02.588 10:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:02.588 10:36:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:02.588 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:02.588 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:02.588 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:02.588 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:02.588 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:02.589 
10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63704 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.589 Process raid pid: 63704 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63704' 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63704 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63704 ']' 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.589 10:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.589 [2024-11-18 10:36:28.371682] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:02.589 [2024-11-18 10:36:28.371859] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.849 [2024-11-18 10:36:28.545751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.849 [2024-11-18 10:36:28.675790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.109 [2024-11-18 10:36:28.903781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.109 [2024-11-18 10:36:28.903880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.368 [2024-11-18 10:36:29.203599] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.368 [2024-11-18 10:36:29.203658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.368 [2024-11-18 10:36:29.203668] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.368 [2024-11-18 10:36:29.203678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.368 [2024-11-18 10:36:29.203683] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:03.368 [2024-11-18 10:36:29.203692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.368 10:36:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.368 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.627 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.627 "name": "Existed_Raid", 00:08:03.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.627 "strip_size_kb": 64, 00:08:03.627 "state": "configuring", 00:08:03.627 "raid_level": "raid0", 00:08:03.627 "superblock": false, 00:08:03.627 "num_base_bdevs": 3, 00:08:03.627 "num_base_bdevs_discovered": 0, 00:08:03.627 "num_base_bdevs_operational": 3, 00:08:03.627 "base_bdevs_list": [ 00:08:03.627 { 00:08:03.627 "name": "BaseBdev1", 00:08:03.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.627 "is_configured": false, 00:08:03.627 "data_offset": 0, 00:08:03.627 "data_size": 0 00:08:03.627 }, 00:08:03.627 { 00:08:03.627 "name": "BaseBdev2", 00:08:03.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.627 "is_configured": false, 00:08:03.627 "data_offset": 0, 00:08:03.627 "data_size": 0 00:08:03.627 }, 00:08:03.627 { 00:08:03.627 "name": "BaseBdev3", 00:08:03.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.627 "is_configured": false, 00:08:03.627 "data_offset": 0, 00:08:03.627 "data_size": 0 00:08:03.627 } 00:08:03.627 ] 00:08:03.627 }' 00:08:03.627 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.627 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.886 10:36:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.886 [2024-11-18 10:36:29.622834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.886 [2024-11-18 10:36:29.622974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.886 [2024-11-18 10:36:29.634812] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.886 [2024-11-18 10:36:29.634898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.886 [2024-11-18 10:36:29.634929] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.886 [2024-11-18 10:36:29.634982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.886 [2024-11-18 10:36:29.635017] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:03.886 [2024-11-18 10:36:29.635050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.886 [2024-11-18 10:36:29.689013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.886 BaseBdev1 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.886 [ 00:08:03.886 { 00:08:03.886 "name": "BaseBdev1", 00:08:03.886 "aliases": [ 00:08:03.886 "812778d5-d4a2-417d-a71f-6226d7925774" 00:08:03.886 ], 00:08:03.886 
"product_name": "Malloc disk", 00:08:03.886 "block_size": 512, 00:08:03.886 "num_blocks": 65536, 00:08:03.886 "uuid": "812778d5-d4a2-417d-a71f-6226d7925774", 00:08:03.886 "assigned_rate_limits": { 00:08:03.886 "rw_ios_per_sec": 0, 00:08:03.886 "rw_mbytes_per_sec": 0, 00:08:03.886 "r_mbytes_per_sec": 0, 00:08:03.886 "w_mbytes_per_sec": 0 00:08:03.886 }, 00:08:03.886 "claimed": true, 00:08:03.886 "claim_type": "exclusive_write", 00:08:03.886 "zoned": false, 00:08:03.886 "supported_io_types": { 00:08:03.886 "read": true, 00:08:03.886 "write": true, 00:08:03.886 "unmap": true, 00:08:03.886 "flush": true, 00:08:03.886 "reset": true, 00:08:03.886 "nvme_admin": false, 00:08:03.886 "nvme_io": false, 00:08:03.886 "nvme_io_md": false, 00:08:03.886 "write_zeroes": true, 00:08:03.886 "zcopy": true, 00:08:03.886 "get_zone_info": false, 00:08:03.886 "zone_management": false, 00:08:03.886 "zone_append": false, 00:08:03.886 "compare": false, 00:08:03.886 "compare_and_write": false, 00:08:03.886 "abort": true, 00:08:03.886 "seek_hole": false, 00:08:03.886 "seek_data": false, 00:08:03.886 "copy": true, 00:08:03.886 "nvme_iov_md": false 00:08:03.886 }, 00:08:03.886 "memory_domains": [ 00:08:03.886 { 00:08:03.886 "dma_device_id": "system", 00:08:03.886 "dma_device_type": 1 00:08:03.886 }, 00:08:03.886 { 00:08:03.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.886 "dma_device_type": 2 00:08:03.886 } 00:08:03.886 ], 00:08:03.886 "driver_specific": {} 00:08:03.886 } 00:08:03.886 ] 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.886 10:36:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.886 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.146 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.146 "name": "Existed_Raid", 00:08:04.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.146 "strip_size_kb": 64, 00:08:04.146 "state": "configuring", 00:08:04.146 "raid_level": "raid0", 00:08:04.146 "superblock": false, 00:08:04.146 "num_base_bdevs": 3, 00:08:04.146 "num_base_bdevs_discovered": 1, 00:08:04.146 "num_base_bdevs_operational": 3, 00:08:04.146 "base_bdevs_list": [ 00:08:04.146 { 00:08:04.146 "name": "BaseBdev1", 
00:08:04.146 "uuid": "812778d5-d4a2-417d-a71f-6226d7925774", 00:08:04.146 "is_configured": true, 00:08:04.146 "data_offset": 0, 00:08:04.146 "data_size": 65536 00:08:04.146 }, 00:08:04.146 { 00:08:04.146 "name": "BaseBdev2", 00:08:04.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.146 "is_configured": false, 00:08:04.146 "data_offset": 0, 00:08:04.146 "data_size": 0 00:08:04.146 }, 00:08:04.146 { 00:08:04.146 "name": "BaseBdev3", 00:08:04.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.146 "is_configured": false, 00:08:04.146 "data_offset": 0, 00:08:04.146 "data_size": 0 00:08:04.146 } 00:08:04.146 ] 00:08:04.146 }' 00:08:04.146 10:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.146 10:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.406 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.406 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.406 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.406 [2024-11-18 10:36:30.192188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.406 [2024-11-18 10:36:30.192277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:04.406 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.406 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.406 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.406 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.406 [2024-11-18 
10:36:30.200226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.406 [2024-11-18 10:36:30.202196] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.406 [2024-11-18 10:36:30.202236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.406 [2024-11-18 10:36:30.202247] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:04.406 [2024-11-18 10:36:30.202255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.407 "name": "Existed_Raid", 00:08:04.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.407 "strip_size_kb": 64, 00:08:04.407 "state": "configuring", 00:08:04.407 "raid_level": "raid0", 00:08:04.407 "superblock": false, 00:08:04.407 "num_base_bdevs": 3, 00:08:04.407 "num_base_bdevs_discovered": 1, 00:08:04.407 "num_base_bdevs_operational": 3, 00:08:04.407 "base_bdevs_list": [ 00:08:04.407 { 00:08:04.407 "name": "BaseBdev1", 00:08:04.407 "uuid": "812778d5-d4a2-417d-a71f-6226d7925774", 00:08:04.407 "is_configured": true, 00:08:04.407 "data_offset": 0, 00:08:04.407 "data_size": 65536 00:08:04.407 }, 00:08:04.407 { 00:08:04.407 "name": "BaseBdev2", 00:08:04.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.407 "is_configured": false, 00:08:04.407 "data_offset": 0, 00:08:04.407 "data_size": 0 00:08:04.407 }, 00:08:04.407 { 00:08:04.407 "name": "BaseBdev3", 00:08:04.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.407 "is_configured": false, 00:08:04.407 "data_offset": 0, 00:08:04.407 "data_size": 0 00:08:04.407 } 00:08:04.407 ] 00:08:04.407 }' 00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:04.407 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.975 [2024-11-18 10:36:30.687012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.975 BaseBdev2 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:04.975 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.976 10:36:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.976 [ 00:08:04.976 { 00:08:04.976 "name": "BaseBdev2", 00:08:04.976 "aliases": [ 00:08:04.976 "a719a897-eaf2-4164-be50-1e75ea428131" 00:08:04.976 ], 00:08:04.976 "product_name": "Malloc disk", 00:08:04.976 "block_size": 512, 00:08:04.976 "num_blocks": 65536, 00:08:04.976 "uuid": "a719a897-eaf2-4164-be50-1e75ea428131", 00:08:04.976 "assigned_rate_limits": { 00:08:04.976 "rw_ios_per_sec": 0, 00:08:04.976 "rw_mbytes_per_sec": 0, 00:08:04.976 "r_mbytes_per_sec": 0, 00:08:04.976 "w_mbytes_per_sec": 0 00:08:04.976 }, 00:08:04.976 "claimed": true, 00:08:04.976 "claim_type": "exclusive_write", 00:08:04.976 "zoned": false, 00:08:04.976 "supported_io_types": { 00:08:04.976 "read": true, 00:08:04.976 "write": true, 00:08:04.976 "unmap": true, 00:08:04.976 "flush": true, 00:08:04.976 "reset": true, 00:08:04.976 "nvme_admin": false, 00:08:04.976 "nvme_io": false, 00:08:04.976 "nvme_io_md": false, 00:08:04.976 "write_zeroes": true, 00:08:04.976 "zcopy": true, 00:08:04.976 "get_zone_info": false, 00:08:04.976 "zone_management": false, 00:08:04.976 "zone_append": false, 00:08:04.976 "compare": false, 00:08:04.976 "compare_and_write": false, 00:08:04.976 "abort": true, 00:08:04.976 "seek_hole": false, 00:08:04.976 "seek_data": false, 00:08:04.976 "copy": true, 00:08:04.976 "nvme_iov_md": false 00:08:04.976 }, 00:08:04.976 "memory_domains": [ 00:08:04.976 { 00:08:04.976 "dma_device_id": "system", 00:08:04.976 "dma_device_type": 1 00:08:04.976 }, 00:08:04.976 { 00:08:04.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.976 "dma_device_type": 2 00:08:04.976 } 00:08:04.976 ], 00:08:04.976 "driver_specific": {} 00:08:04.976 } 00:08:04.976 ] 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.976 10:36:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.976 "name": "Existed_Raid", 00:08:04.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.976 "strip_size_kb": 64, 00:08:04.976 "state": "configuring", 00:08:04.976 "raid_level": "raid0", 00:08:04.976 "superblock": false, 00:08:04.976 "num_base_bdevs": 3, 00:08:04.976 "num_base_bdevs_discovered": 2, 00:08:04.976 "num_base_bdevs_operational": 3, 00:08:04.976 "base_bdevs_list": [ 00:08:04.976 { 00:08:04.976 "name": "BaseBdev1", 00:08:04.976 "uuid": "812778d5-d4a2-417d-a71f-6226d7925774", 00:08:04.976 "is_configured": true, 00:08:04.976 "data_offset": 0, 00:08:04.976 "data_size": 65536 00:08:04.976 }, 00:08:04.976 { 00:08:04.976 "name": "BaseBdev2", 00:08:04.976 "uuid": "a719a897-eaf2-4164-be50-1e75ea428131", 00:08:04.976 "is_configured": true, 00:08:04.976 "data_offset": 0, 00:08:04.976 "data_size": 65536 00:08:04.976 }, 00:08:04.976 { 00:08:04.976 "name": "BaseBdev3", 00:08:04.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.976 "is_configured": false, 00:08:04.976 "data_offset": 0, 00:08:04.976 "data_size": 0 00:08:04.976 } 00:08:04.976 ] 00:08:04.976 }' 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.976 10:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.546 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:05.546 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.547 [2024-11-18 10:36:31.214490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:05.547 [2024-11-18 10:36:31.214531] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.547 [2024-11-18 10:36:31.214547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:05.547 [2024-11-18 10:36:31.215044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:05.547 [2024-11-18 10:36:31.215266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.547 [2024-11-18 10:36:31.215277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:05.547 BaseBdev3 00:08:05.547 [2024-11-18 10:36:31.215545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.547 
10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.547 [ 00:08:05.547 { 00:08:05.547 "name": "BaseBdev3", 00:08:05.547 "aliases": [ 00:08:05.547 "f2467256-9435-4557-b3e8-aeb429b3705a" 00:08:05.547 ], 00:08:05.547 "product_name": "Malloc disk", 00:08:05.547 "block_size": 512, 00:08:05.547 "num_blocks": 65536, 00:08:05.547 "uuid": "f2467256-9435-4557-b3e8-aeb429b3705a", 00:08:05.547 "assigned_rate_limits": { 00:08:05.547 "rw_ios_per_sec": 0, 00:08:05.547 "rw_mbytes_per_sec": 0, 00:08:05.547 "r_mbytes_per_sec": 0, 00:08:05.547 "w_mbytes_per_sec": 0 00:08:05.547 }, 00:08:05.547 "claimed": true, 00:08:05.547 "claim_type": "exclusive_write", 00:08:05.547 "zoned": false, 00:08:05.547 "supported_io_types": { 00:08:05.547 "read": true, 00:08:05.547 "write": true, 00:08:05.547 "unmap": true, 00:08:05.547 "flush": true, 00:08:05.547 "reset": true, 00:08:05.547 "nvme_admin": false, 00:08:05.547 "nvme_io": false, 00:08:05.547 "nvme_io_md": false, 00:08:05.547 "write_zeroes": true, 00:08:05.547 "zcopy": true, 00:08:05.547 "get_zone_info": false, 00:08:05.547 "zone_management": false, 00:08:05.547 "zone_append": false, 00:08:05.547 "compare": false, 00:08:05.547 "compare_and_write": false, 00:08:05.547 "abort": true, 00:08:05.547 "seek_hole": false, 00:08:05.547 "seek_data": false, 00:08:05.547 "copy": true, 00:08:05.547 "nvme_iov_md": false 00:08:05.547 }, 00:08:05.547 "memory_domains": [ 00:08:05.547 { 00:08:05.547 "dma_device_id": "system", 00:08:05.547 "dma_device_type": 1 00:08:05.547 }, 00:08:05.547 { 00:08:05.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.547 "dma_device_type": 2 00:08:05.547 } 00:08:05.547 ], 00:08:05.547 "driver_specific": {} 00:08:05.547 } 00:08:05.547 ] 
00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.547 "name": "Existed_Raid", 00:08:05.547 "uuid": "9d8fee86-b8c5-43cb-8f8c-8a5dac043b80", 00:08:05.547 "strip_size_kb": 64, 00:08:05.547 "state": "online", 00:08:05.547 "raid_level": "raid0", 00:08:05.547 "superblock": false, 00:08:05.547 "num_base_bdevs": 3, 00:08:05.547 "num_base_bdevs_discovered": 3, 00:08:05.547 "num_base_bdevs_operational": 3, 00:08:05.547 "base_bdevs_list": [ 00:08:05.547 { 00:08:05.547 "name": "BaseBdev1", 00:08:05.547 "uuid": "812778d5-d4a2-417d-a71f-6226d7925774", 00:08:05.547 "is_configured": true, 00:08:05.547 "data_offset": 0, 00:08:05.547 "data_size": 65536 00:08:05.547 }, 00:08:05.547 { 00:08:05.547 "name": "BaseBdev2", 00:08:05.547 "uuid": "a719a897-eaf2-4164-be50-1e75ea428131", 00:08:05.547 "is_configured": true, 00:08:05.547 "data_offset": 0, 00:08:05.547 "data_size": 65536 00:08:05.547 }, 00:08:05.547 { 00:08:05.547 "name": "BaseBdev3", 00:08:05.547 "uuid": "f2467256-9435-4557-b3e8-aeb429b3705a", 00:08:05.547 "is_configured": true, 00:08:05.547 "data_offset": 0, 00:08:05.547 "data_size": 65536 00:08:05.547 } 00:08:05.547 ] 00:08:05.547 }' 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.547 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 [2024-11-18 10:36:31.646037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.807 "name": "Existed_Raid", 00:08:05.807 "aliases": [ 00:08:05.807 "9d8fee86-b8c5-43cb-8f8c-8a5dac043b80" 00:08:05.807 ], 00:08:05.807 "product_name": "Raid Volume", 00:08:05.807 "block_size": 512, 00:08:05.807 "num_blocks": 196608, 00:08:05.807 "uuid": "9d8fee86-b8c5-43cb-8f8c-8a5dac043b80", 00:08:05.807 "assigned_rate_limits": { 00:08:05.807 "rw_ios_per_sec": 0, 00:08:05.807 "rw_mbytes_per_sec": 0, 00:08:05.807 "r_mbytes_per_sec": 0, 00:08:05.807 "w_mbytes_per_sec": 0 00:08:05.807 }, 00:08:05.807 "claimed": false, 00:08:05.807 "zoned": false, 00:08:05.807 "supported_io_types": { 00:08:05.807 "read": true, 00:08:05.807 "write": true, 00:08:05.807 "unmap": true, 00:08:05.807 "flush": true, 00:08:05.807 "reset": true, 00:08:05.807 "nvme_admin": false, 00:08:05.807 "nvme_io": false, 00:08:05.807 "nvme_io_md": false, 00:08:05.807 "write_zeroes": true, 00:08:05.807 "zcopy": false, 00:08:05.807 "get_zone_info": false, 00:08:05.807 "zone_management": false, 00:08:05.807 
"zone_append": false, 00:08:05.807 "compare": false, 00:08:05.807 "compare_and_write": false, 00:08:05.807 "abort": false, 00:08:05.807 "seek_hole": false, 00:08:05.807 "seek_data": false, 00:08:05.807 "copy": false, 00:08:05.807 "nvme_iov_md": false 00:08:05.807 }, 00:08:05.807 "memory_domains": [ 00:08:05.807 { 00:08:05.807 "dma_device_id": "system", 00:08:05.807 "dma_device_type": 1 00:08:05.807 }, 00:08:05.807 { 00:08:05.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.807 "dma_device_type": 2 00:08:05.807 }, 00:08:05.807 { 00:08:05.807 "dma_device_id": "system", 00:08:05.807 "dma_device_type": 1 00:08:05.807 }, 00:08:05.807 { 00:08:05.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.807 "dma_device_type": 2 00:08:05.807 }, 00:08:05.807 { 00:08:05.807 "dma_device_id": "system", 00:08:05.807 "dma_device_type": 1 00:08:05.807 }, 00:08:05.807 { 00:08:05.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.807 "dma_device_type": 2 00:08:05.807 } 00:08:05.807 ], 00:08:05.807 "driver_specific": { 00:08:05.807 "raid": { 00:08:05.807 "uuid": "9d8fee86-b8c5-43cb-8f8c-8a5dac043b80", 00:08:05.807 "strip_size_kb": 64, 00:08:05.807 "state": "online", 00:08:05.807 "raid_level": "raid0", 00:08:05.807 "superblock": false, 00:08:05.807 "num_base_bdevs": 3, 00:08:05.807 "num_base_bdevs_discovered": 3, 00:08:05.807 "num_base_bdevs_operational": 3, 00:08:05.807 "base_bdevs_list": [ 00:08:05.807 { 00:08:05.807 "name": "BaseBdev1", 00:08:05.807 "uuid": "812778d5-d4a2-417d-a71f-6226d7925774", 00:08:05.807 "is_configured": true, 00:08:05.807 "data_offset": 0, 00:08:05.807 "data_size": 65536 00:08:05.807 }, 00:08:05.807 { 00:08:05.807 "name": "BaseBdev2", 00:08:05.807 "uuid": "a719a897-eaf2-4164-be50-1e75ea428131", 00:08:05.807 "is_configured": true, 00:08:05.807 "data_offset": 0, 00:08:05.807 "data_size": 65536 00:08:05.807 }, 00:08:05.807 { 00:08:05.807 "name": "BaseBdev3", 00:08:05.807 "uuid": "f2467256-9435-4557-b3e8-aeb429b3705a", 00:08:05.807 "is_configured": true, 
00:08:05.807 "data_offset": 0, 00:08:05.807 "data_size": 65536 00:08:05.807 } 00:08:05.807 ] 00:08:05.807 } 00:08:05.807 } 00:08:05.807 }' 00:08:05.807 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:06.068 BaseBdev2 00:08:06.068 BaseBdev3' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.068 10:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.068 [2024-11-18 10:36:31.917345] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.068 [2024-11-18 10:36:31.917415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.068 [2024-11-18 10:36:31.917473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.328 "name": "Existed_Raid", 00:08:06.328 "uuid": "9d8fee86-b8c5-43cb-8f8c-8a5dac043b80", 00:08:06.328 "strip_size_kb": 64, 00:08:06.328 "state": "offline", 00:08:06.328 "raid_level": "raid0", 00:08:06.328 "superblock": false, 00:08:06.328 "num_base_bdevs": 3, 00:08:06.328 "num_base_bdevs_discovered": 2, 00:08:06.328 "num_base_bdevs_operational": 2, 00:08:06.328 "base_bdevs_list": [ 00:08:06.328 { 00:08:06.328 "name": null, 00:08:06.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.328 "is_configured": false, 00:08:06.328 "data_offset": 0, 00:08:06.328 "data_size": 65536 00:08:06.328 }, 00:08:06.328 { 00:08:06.328 "name": "BaseBdev2", 00:08:06.328 "uuid": "a719a897-eaf2-4164-be50-1e75ea428131", 00:08:06.328 "is_configured": true, 00:08:06.328 "data_offset": 0, 00:08:06.328 "data_size": 65536 00:08:06.328 }, 00:08:06.328 { 00:08:06.328 "name": "BaseBdev3", 00:08:06.328 "uuid": "f2467256-9435-4557-b3e8-aeb429b3705a", 00:08:06.328 "is_configured": true, 00:08:06.328 "data_offset": 0, 00:08:06.328 "data_size": 65536 00:08:06.328 } 00:08:06.328 ] 00:08:06.328 }' 00:08:06.328 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.328 10:36:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.589 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.589 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.589 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.589 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.589 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.589 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.589 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.848 [2024-11-18 10:36:32.497386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.848 10:36:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.848 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.848 [2024-11-18 10:36:32.658163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:06.848 [2024-11-18 10:36:32.658285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.109 BaseBdev2 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:07.109 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 [ 00:08:07.110 { 00:08:07.110 "name": "BaseBdev2", 00:08:07.110 "aliases": [ 00:08:07.110 "d9b05167-9921-4fa2-be30-c27c807127a0" 00:08:07.110 ], 00:08:07.110 "product_name": "Malloc disk", 00:08:07.110 "block_size": 512, 00:08:07.110 "num_blocks": 65536, 00:08:07.110 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:07.110 "assigned_rate_limits": { 00:08:07.110 "rw_ios_per_sec": 0, 00:08:07.110 "rw_mbytes_per_sec": 0, 00:08:07.110 "r_mbytes_per_sec": 0, 00:08:07.110 "w_mbytes_per_sec": 0 00:08:07.110 }, 00:08:07.110 "claimed": false, 00:08:07.110 "zoned": false, 00:08:07.110 "supported_io_types": { 00:08:07.110 "read": true, 00:08:07.110 "write": true, 00:08:07.110 "unmap": true, 00:08:07.110 "flush": true, 00:08:07.110 "reset": true, 00:08:07.110 "nvme_admin": false, 00:08:07.110 "nvme_io": false, 00:08:07.110 "nvme_io_md": false, 00:08:07.110 "write_zeroes": true, 00:08:07.110 "zcopy": true, 00:08:07.110 "get_zone_info": false, 00:08:07.110 "zone_management": false, 00:08:07.110 "zone_append": false, 00:08:07.110 "compare": false, 00:08:07.110 "compare_and_write": false, 00:08:07.110 "abort": true, 00:08:07.110 "seek_hole": false, 00:08:07.110 "seek_data": false, 00:08:07.110 "copy": true, 00:08:07.110 "nvme_iov_md": false 00:08:07.110 }, 00:08:07.110 "memory_domains": [ 00:08:07.110 { 00:08:07.110 "dma_device_id": "system", 00:08:07.110 "dma_device_type": 1 00:08:07.110 }, 
00:08:07.110 { 00:08:07.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.110 "dma_device_type": 2 00:08:07.110 } 00:08:07.110 ], 00:08:07.110 "driver_specific": {} 00:08:07.110 } 00:08:07.110 ] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 BaseBdev3 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 [ 00:08:07.110 { 00:08:07.110 "name": "BaseBdev3", 00:08:07.110 "aliases": [ 00:08:07.110 "3efc0526-aee9-460c-a156-f411a828250e" 00:08:07.110 ], 00:08:07.110 "product_name": "Malloc disk", 00:08:07.110 "block_size": 512, 00:08:07.110 "num_blocks": 65536, 00:08:07.110 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:07.110 "assigned_rate_limits": { 00:08:07.110 "rw_ios_per_sec": 0, 00:08:07.110 "rw_mbytes_per_sec": 0, 00:08:07.110 "r_mbytes_per_sec": 0, 00:08:07.110 "w_mbytes_per_sec": 0 00:08:07.110 }, 00:08:07.110 "claimed": false, 00:08:07.110 "zoned": false, 00:08:07.110 "supported_io_types": { 00:08:07.110 "read": true, 00:08:07.110 "write": true, 00:08:07.110 "unmap": true, 00:08:07.110 "flush": true, 00:08:07.110 "reset": true, 00:08:07.110 "nvme_admin": false, 00:08:07.110 "nvme_io": false, 00:08:07.110 "nvme_io_md": false, 00:08:07.110 "write_zeroes": true, 00:08:07.110 "zcopy": true, 00:08:07.110 "get_zone_info": false, 00:08:07.110 "zone_management": false, 00:08:07.110 "zone_append": false, 00:08:07.110 "compare": false, 00:08:07.110 "compare_and_write": false, 00:08:07.110 "abort": true, 00:08:07.110 "seek_hole": false, 00:08:07.110 "seek_data": false, 00:08:07.110 "copy": true, 00:08:07.110 "nvme_iov_md": false 00:08:07.110 }, 00:08:07.110 "memory_domains": [ 00:08:07.110 { 00:08:07.110 "dma_device_id": "system", 00:08:07.110 "dma_device_type": 1 00:08:07.110 }, 00:08:07.110 { 
00:08:07.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.110 "dma_device_type": 2 00:08:07.110 } 00:08:07.110 ], 00:08:07.110 "driver_specific": {} 00:08:07.110 } 00:08:07.110 ] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.110 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 [2024-11-18 10:36:32.991734] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.110 [2024-11-18 10:36:32.991857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.110 [2024-11-18 10:36:32.991907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.370 [2024-11-18 10:36:32.993967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.370 10:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.370 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.370 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.370 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:07.370 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.370 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.371 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.371 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.371 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.371 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.371 10:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.371 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.371 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.371 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.371 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.371 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.371 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.371 "name": "Existed_Raid", 00:08:07.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.371 "strip_size_kb": 64, 00:08:07.371 "state": "configuring", 00:08:07.371 "raid_level": "raid0", 00:08:07.371 "superblock": false, 00:08:07.371 "num_base_bdevs": 3, 00:08:07.371 "num_base_bdevs_discovered": 2, 00:08:07.371 "num_base_bdevs_operational": 3, 00:08:07.371 "base_bdevs_list": [ 00:08:07.371 { 00:08:07.371 "name": "BaseBdev1", 00:08:07.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.371 
"is_configured": false, 00:08:07.371 "data_offset": 0, 00:08:07.371 "data_size": 0 00:08:07.371 }, 00:08:07.371 { 00:08:07.371 "name": "BaseBdev2", 00:08:07.371 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:07.371 "is_configured": true, 00:08:07.371 "data_offset": 0, 00:08:07.371 "data_size": 65536 00:08:07.371 }, 00:08:07.371 { 00:08:07.371 "name": "BaseBdev3", 00:08:07.371 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:07.371 "is_configured": true, 00:08:07.371 "data_offset": 0, 00:08:07.371 "data_size": 65536 00:08:07.371 } 00:08:07.371 ] 00:08:07.371 }' 00:08:07.371 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.371 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.631 [2024-11-18 10:36:33.359107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.631 10:36:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.631 "name": "Existed_Raid", 00:08:07.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.631 "strip_size_kb": 64, 00:08:07.631 "state": "configuring", 00:08:07.631 "raid_level": "raid0", 00:08:07.631 "superblock": false, 00:08:07.631 "num_base_bdevs": 3, 00:08:07.631 "num_base_bdevs_discovered": 1, 00:08:07.631 "num_base_bdevs_operational": 3, 00:08:07.631 "base_bdevs_list": [ 00:08:07.631 { 00:08:07.631 "name": "BaseBdev1", 00:08:07.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.631 "is_configured": false, 00:08:07.631 "data_offset": 0, 00:08:07.631 "data_size": 0 00:08:07.631 }, 00:08:07.631 { 00:08:07.631 "name": null, 00:08:07.631 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:07.631 "is_configured": false, 00:08:07.631 "data_offset": 0, 
00:08:07.631 "data_size": 65536 00:08:07.631 }, 00:08:07.631 { 00:08:07.631 "name": "BaseBdev3", 00:08:07.631 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:07.631 "is_configured": true, 00:08:07.631 "data_offset": 0, 00:08:07.631 "data_size": 65536 00:08:07.631 } 00:08:07.631 ] 00:08:07.631 }' 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.631 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.203 [2024-11-18 10:36:33.888782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.203 BaseBdev1 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.203 [ 00:08:08.203 { 00:08:08.203 "name": "BaseBdev1", 00:08:08.203 "aliases": [ 00:08:08.203 "647b4c3d-0b17-4cd9-aa3b-83e0160ed969" 00:08:08.203 ], 00:08:08.203 "product_name": "Malloc disk", 00:08:08.203 "block_size": 512, 00:08:08.203 "num_blocks": 65536, 00:08:08.203 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:08.203 "assigned_rate_limits": { 00:08:08.203 "rw_ios_per_sec": 0, 00:08:08.203 "rw_mbytes_per_sec": 0, 00:08:08.203 "r_mbytes_per_sec": 0, 00:08:08.203 "w_mbytes_per_sec": 0 00:08:08.203 }, 00:08:08.203 "claimed": true, 00:08:08.203 "claim_type": "exclusive_write", 00:08:08.203 "zoned": false, 00:08:08.203 "supported_io_types": { 00:08:08.203 "read": true, 00:08:08.203 "write": true, 00:08:08.203 "unmap": 
true, 00:08:08.203 "flush": true, 00:08:08.203 "reset": true, 00:08:08.203 "nvme_admin": false, 00:08:08.203 "nvme_io": false, 00:08:08.203 "nvme_io_md": false, 00:08:08.203 "write_zeroes": true, 00:08:08.203 "zcopy": true, 00:08:08.203 "get_zone_info": false, 00:08:08.203 "zone_management": false, 00:08:08.203 "zone_append": false, 00:08:08.203 "compare": false, 00:08:08.203 "compare_and_write": false, 00:08:08.203 "abort": true, 00:08:08.203 "seek_hole": false, 00:08:08.203 "seek_data": false, 00:08:08.203 "copy": true, 00:08:08.203 "nvme_iov_md": false 00:08:08.203 }, 00:08:08.203 "memory_domains": [ 00:08:08.203 { 00:08:08.203 "dma_device_id": "system", 00:08:08.203 "dma_device_type": 1 00:08:08.203 }, 00:08:08.203 { 00:08:08.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.203 "dma_device_type": 2 00:08:08.203 } 00:08:08.203 ], 00:08:08.203 "driver_specific": {} 00:08:08.203 } 00:08:08.203 ] 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.203 10:36:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.203 "name": "Existed_Raid", 00:08:08.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.203 "strip_size_kb": 64, 00:08:08.203 "state": "configuring", 00:08:08.203 "raid_level": "raid0", 00:08:08.203 "superblock": false, 00:08:08.203 "num_base_bdevs": 3, 00:08:08.203 "num_base_bdevs_discovered": 2, 00:08:08.203 "num_base_bdevs_operational": 3, 00:08:08.203 "base_bdevs_list": [ 00:08:08.203 { 00:08:08.203 "name": "BaseBdev1", 00:08:08.203 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:08.203 "is_configured": true, 00:08:08.203 "data_offset": 0, 00:08:08.203 "data_size": 65536 00:08:08.203 }, 00:08:08.203 { 00:08:08.203 "name": null, 00:08:08.203 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:08.203 "is_configured": false, 00:08:08.203 "data_offset": 0, 00:08:08.203 "data_size": 65536 00:08:08.203 }, 00:08:08.203 { 00:08:08.203 "name": "BaseBdev3", 00:08:08.203 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:08.203 "is_configured": true, 00:08:08.203 "data_offset": 0, 
00:08:08.203 "data_size": 65536 00:08:08.203 } 00:08:08.203 ] 00:08:08.203 }' 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.203 10:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.772 [2024-11-18 10:36:34.387943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.772 "name": "Existed_Raid", 00:08:08.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.772 "strip_size_kb": 64, 00:08:08.772 "state": "configuring", 00:08:08.772 "raid_level": "raid0", 00:08:08.772 "superblock": false, 00:08:08.772 "num_base_bdevs": 3, 00:08:08.772 "num_base_bdevs_discovered": 1, 00:08:08.772 "num_base_bdevs_operational": 3, 00:08:08.772 "base_bdevs_list": [ 00:08:08.772 { 00:08:08.772 "name": "BaseBdev1", 00:08:08.772 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:08.772 "is_configured": true, 00:08:08.772 "data_offset": 0, 00:08:08.772 "data_size": 65536 00:08:08.772 }, 00:08:08.772 { 
00:08:08.772 "name": null, 00:08:08.772 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:08.772 "is_configured": false, 00:08:08.772 "data_offset": 0, 00:08:08.772 "data_size": 65536 00:08:08.772 }, 00:08:08.772 { 00:08:08.772 "name": null, 00:08:08.772 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:08.772 "is_configured": false, 00:08:08.772 "data_offset": 0, 00:08:08.772 "data_size": 65536 00:08:08.772 } 00:08:08.772 ] 00:08:08.772 }' 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.772 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.032 [2024-11-18 10:36:34.871156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.032 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.296 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.296 "name": "Existed_Raid", 00:08:09.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.296 "strip_size_kb": 64, 00:08:09.296 "state": "configuring", 00:08:09.296 "raid_level": "raid0", 00:08:09.296 
"superblock": false, 00:08:09.296 "num_base_bdevs": 3, 00:08:09.296 "num_base_bdevs_discovered": 2, 00:08:09.296 "num_base_bdevs_operational": 3, 00:08:09.296 "base_bdevs_list": [ 00:08:09.297 { 00:08:09.297 "name": "BaseBdev1", 00:08:09.297 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:09.297 "is_configured": true, 00:08:09.297 "data_offset": 0, 00:08:09.297 "data_size": 65536 00:08:09.297 }, 00:08:09.297 { 00:08:09.297 "name": null, 00:08:09.297 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:09.297 "is_configured": false, 00:08:09.297 "data_offset": 0, 00:08:09.297 "data_size": 65536 00:08:09.297 }, 00:08:09.297 { 00:08:09.297 "name": "BaseBdev3", 00:08:09.297 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:09.297 "is_configured": true, 00:08:09.297 "data_offset": 0, 00:08:09.297 "data_size": 65536 00:08:09.297 } 00:08:09.297 ] 00:08:09.297 }' 00:08:09.297 10:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.297 10:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:09.559 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.559 [2024-11-18 10:36:35.362773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.818 "name": "Existed_Raid", 00:08:09.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.818 "strip_size_kb": 64, 00:08:09.818 "state": "configuring", 00:08:09.818 "raid_level": "raid0", 00:08:09.818 "superblock": false, 00:08:09.818 "num_base_bdevs": 3, 00:08:09.818 "num_base_bdevs_discovered": 1, 00:08:09.818 "num_base_bdevs_operational": 3, 00:08:09.818 "base_bdevs_list": [ 00:08:09.818 { 00:08:09.818 "name": null, 00:08:09.818 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:09.818 "is_configured": false, 00:08:09.818 "data_offset": 0, 00:08:09.818 "data_size": 65536 00:08:09.818 }, 00:08:09.818 { 00:08:09.818 "name": null, 00:08:09.818 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:09.818 "is_configured": false, 00:08:09.818 "data_offset": 0, 00:08:09.818 "data_size": 65536 00:08:09.818 }, 00:08:09.818 { 00:08:09.818 "name": "BaseBdev3", 00:08:09.818 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:09.818 "is_configured": true, 00:08:09.818 "data_offset": 0, 00:08:09.818 "data_size": 65536 00:08:09.818 } 00:08:09.818 ] 00:08:09.818 }' 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.818 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.078 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.078 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.078 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.078 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:10.078 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.079 [2024-11-18 10:36:35.909345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.079 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.339 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.339 "name": "Existed_Raid", 00:08:10.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.339 "strip_size_kb": 64, 00:08:10.339 "state": "configuring", 00:08:10.339 "raid_level": "raid0", 00:08:10.339 "superblock": false, 00:08:10.339 "num_base_bdevs": 3, 00:08:10.339 "num_base_bdevs_discovered": 2, 00:08:10.339 "num_base_bdevs_operational": 3, 00:08:10.339 "base_bdevs_list": [ 00:08:10.339 { 00:08:10.339 "name": null, 00:08:10.339 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:10.339 "is_configured": false, 00:08:10.339 "data_offset": 0, 00:08:10.339 "data_size": 65536 00:08:10.339 }, 00:08:10.339 { 00:08:10.339 "name": "BaseBdev2", 00:08:10.339 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:10.339 "is_configured": true, 00:08:10.339 "data_offset": 0, 00:08:10.339 "data_size": 65536 00:08:10.339 }, 00:08:10.339 { 00:08:10.339 "name": "BaseBdev3", 00:08:10.339 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:10.339 "is_configured": true, 00:08:10.339 "data_offset": 0, 00:08:10.339 "data_size": 65536 00:08:10.339 } 00:08:10.339 ] 00:08:10.339 }' 00:08:10.339 10:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.339 10:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:10.600 
10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 647b4c3d-0b17-4cd9-aa3b-83e0160ed969 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.600 [2024-11-18 10:36:36.449297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:10.600 [2024-11-18 10:36:36.449411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:10.600 [2024-11-18 10:36:36.449438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:10.600 [2024-11-18 10:36:36.449730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:10.600 [2024-11-18 10:36:36.449928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:10.600 [2024-11-18 10:36:36.449966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:10.600 [2024-11-18 10:36:36.450254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.600 NewBaseBdev 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.600 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:10.600 [ 00:08:10.600 { 00:08:10.600 "name": "NewBaseBdev", 00:08:10.600 "aliases": [ 00:08:10.600 "647b4c3d-0b17-4cd9-aa3b-83e0160ed969" 00:08:10.600 ], 00:08:10.600 "product_name": "Malloc disk", 00:08:10.600 "block_size": 512, 00:08:10.600 "num_blocks": 65536, 00:08:10.600 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:10.600 "assigned_rate_limits": { 00:08:10.600 "rw_ios_per_sec": 0, 00:08:10.600 "rw_mbytes_per_sec": 0, 00:08:10.600 "r_mbytes_per_sec": 0, 00:08:10.600 "w_mbytes_per_sec": 0 00:08:10.600 }, 00:08:10.600 "claimed": true, 00:08:10.600 "claim_type": "exclusive_write", 00:08:10.600 "zoned": false, 00:08:10.600 "supported_io_types": { 00:08:10.600 "read": true, 00:08:10.600 "write": true, 00:08:10.600 "unmap": true, 00:08:10.600 "flush": true, 00:08:10.600 "reset": true, 00:08:10.600 "nvme_admin": false, 00:08:10.600 "nvme_io": false, 00:08:10.600 "nvme_io_md": false, 00:08:10.600 "write_zeroes": true, 00:08:10.600 "zcopy": true, 00:08:10.600 "get_zone_info": false, 00:08:10.600 "zone_management": false, 00:08:10.600 "zone_append": false, 00:08:10.600 "compare": false, 00:08:10.860 "compare_and_write": false, 00:08:10.860 "abort": true, 00:08:10.860 "seek_hole": false, 00:08:10.860 "seek_data": false, 00:08:10.860 "copy": true, 00:08:10.860 "nvme_iov_md": false 00:08:10.860 }, 00:08:10.860 "memory_domains": [ 00:08:10.860 { 00:08:10.860 "dma_device_id": "system", 00:08:10.860 "dma_device_type": 1 00:08:10.860 }, 00:08:10.860 { 00:08:10.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.860 "dma_device_type": 2 00:08:10.860 } 00:08:10.860 ], 00:08:10.860 "driver_specific": {} 00:08:10.860 } 00:08:10.860 ] 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.860 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.860 "name": "Existed_Raid", 00:08:10.860 "uuid": "4e3a5f11-7968-443f-beff-907e00bd68bd", 00:08:10.860 "strip_size_kb": 64, 00:08:10.860 "state": "online", 00:08:10.860 "raid_level": "raid0", 00:08:10.860 "superblock": false, 00:08:10.860 "num_base_bdevs": 3, 00:08:10.860 
"num_base_bdevs_discovered": 3, 00:08:10.860 "num_base_bdevs_operational": 3, 00:08:10.860 "base_bdevs_list": [ 00:08:10.860 { 00:08:10.860 "name": "NewBaseBdev", 00:08:10.860 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:10.860 "is_configured": true, 00:08:10.860 "data_offset": 0, 00:08:10.860 "data_size": 65536 00:08:10.860 }, 00:08:10.860 { 00:08:10.860 "name": "BaseBdev2", 00:08:10.860 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:10.860 "is_configured": true, 00:08:10.860 "data_offset": 0, 00:08:10.860 "data_size": 65536 00:08:10.860 }, 00:08:10.860 { 00:08:10.860 "name": "BaseBdev3", 00:08:10.860 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:10.860 "is_configured": true, 00:08:10.860 "data_offset": 0, 00:08:10.860 "data_size": 65536 00:08:10.860 } 00:08:10.861 ] 00:08:10.861 }' 00:08:10.861 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.861 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.121 [2024-11-18 10:36:36.936737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.121 "name": "Existed_Raid", 00:08:11.121 "aliases": [ 00:08:11.121 "4e3a5f11-7968-443f-beff-907e00bd68bd" 00:08:11.121 ], 00:08:11.121 "product_name": "Raid Volume", 00:08:11.121 "block_size": 512, 00:08:11.121 "num_blocks": 196608, 00:08:11.121 "uuid": "4e3a5f11-7968-443f-beff-907e00bd68bd", 00:08:11.121 "assigned_rate_limits": { 00:08:11.121 "rw_ios_per_sec": 0, 00:08:11.121 "rw_mbytes_per_sec": 0, 00:08:11.121 "r_mbytes_per_sec": 0, 00:08:11.121 "w_mbytes_per_sec": 0 00:08:11.121 }, 00:08:11.121 "claimed": false, 00:08:11.121 "zoned": false, 00:08:11.121 "supported_io_types": { 00:08:11.121 "read": true, 00:08:11.121 "write": true, 00:08:11.121 "unmap": true, 00:08:11.121 "flush": true, 00:08:11.121 "reset": true, 00:08:11.121 "nvme_admin": false, 00:08:11.121 "nvme_io": false, 00:08:11.121 "nvme_io_md": false, 00:08:11.121 "write_zeroes": true, 00:08:11.121 "zcopy": false, 00:08:11.121 "get_zone_info": false, 00:08:11.121 "zone_management": false, 00:08:11.121 "zone_append": false, 00:08:11.121 "compare": false, 00:08:11.121 "compare_and_write": false, 00:08:11.121 "abort": false, 00:08:11.121 "seek_hole": false, 00:08:11.121 "seek_data": false, 00:08:11.121 "copy": false, 00:08:11.121 "nvme_iov_md": false 00:08:11.121 }, 00:08:11.121 "memory_domains": [ 00:08:11.121 { 00:08:11.121 "dma_device_id": "system", 00:08:11.121 "dma_device_type": 1 00:08:11.121 }, 00:08:11.121 { 00:08:11.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.121 "dma_device_type": 2 00:08:11.121 }, 
00:08:11.121 { 00:08:11.121 "dma_device_id": "system", 00:08:11.121 "dma_device_type": 1 00:08:11.121 }, 00:08:11.121 { 00:08:11.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.121 "dma_device_type": 2 00:08:11.121 }, 00:08:11.121 { 00:08:11.121 "dma_device_id": "system", 00:08:11.121 "dma_device_type": 1 00:08:11.121 }, 00:08:11.121 { 00:08:11.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.121 "dma_device_type": 2 00:08:11.121 } 00:08:11.121 ], 00:08:11.121 "driver_specific": { 00:08:11.121 "raid": { 00:08:11.121 "uuid": "4e3a5f11-7968-443f-beff-907e00bd68bd", 00:08:11.121 "strip_size_kb": 64, 00:08:11.121 "state": "online", 00:08:11.121 "raid_level": "raid0", 00:08:11.121 "superblock": false, 00:08:11.121 "num_base_bdevs": 3, 00:08:11.121 "num_base_bdevs_discovered": 3, 00:08:11.121 "num_base_bdevs_operational": 3, 00:08:11.121 "base_bdevs_list": [ 00:08:11.121 { 00:08:11.121 "name": "NewBaseBdev", 00:08:11.121 "uuid": "647b4c3d-0b17-4cd9-aa3b-83e0160ed969", 00:08:11.121 "is_configured": true, 00:08:11.121 "data_offset": 0, 00:08:11.121 "data_size": 65536 00:08:11.121 }, 00:08:11.121 { 00:08:11.121 "name": "BaseBdev2", 00:08:11.121 "uuid": "d9b05167-9921-4fa2-be30-c27c807127a0", 00:08:11.121 "is_configured": true, 00:08:11.121 "data_offset": 0, 00:08:11.121 "data_size": 65536 00:08:11.121 }, 00:08:11.121 { 00:08:11.121 "name": "BaseBdev3", 00:08:11.121 "uuid": "3efc0526-aee9-460c-a156-f411a828250e", 00:08:11.121 "is_configured": true, 00:08:11.121 "data_offset": 0, 00:08:11.121 "data_size": 65536 00:08:11.121 } 00:08:11.121 ] 00:08:11.121 } 00:08:11.121 } 00:08:11.121 }' 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.121 10:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:11.121 BaseBdev2 00:08:11.121 BaseBdev3' 00:08:11.121 10:36:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.407 [2024-11-18 10:36:37.188042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.407 [2024-11-18 10:36:37.188121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.407 [2024-11-18 10:36:37.188231] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.407 [2024-11-18 10:36:37.188291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.407 [2024-11-18 10:36:37.188304] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63704 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63704 ']' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63704 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63704 00:08:11.407 killing process with pid 63704 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63704' 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63704 00:08:11.407 [2024-11-18 10:36:37.223089] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.407 10:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63704 00:08:11.675 [2024-11-18 10:36:37.541127] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.057 ************************************ 00:08:13.057 END TEST raid_state_function_test 00:08:13.057 ************************************ 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.057 00:08:13.057 real 0m10.419s 
00:08:13.057 user 0m16.305s 00:08:13.057 sys 0m1.939s 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.057 10:36:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:13.057 10:36:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:13.057 10:36:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.057 10:36:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.057 ************************************ 00:08:13.057 START TEST raid_state_function_test_sb 00:08:13.057 ************************************ 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64324 00:08:13.057 10:36:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64324' 00:08:13.057 Process raid pid: 64324 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64324 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64324 ']' 00:08:13.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.057 10:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.057 [2024-11-18 10:36:38.868595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:13.057 [2024-11-18 10:36:38.868806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.317 [2024-11-18 10:36:39.042448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.317 [2024-11-18 10:36:39.176648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.577 [2024-11-18 10:36:39.412301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.577 [2024-11-18 10:36:39.412449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.836 [2024-11-18 10:36:39.691812] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.836 [2024-11-18 10:36:39.691947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.836 [2024-11-18 10:36:39.691980] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.836 [2024-11-18 10:36:39.692004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.836 [2024-11-18 10:36:39.692022] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:13.836 [2024-11-18 10:36:39.692073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.836 10:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.096 10:36:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.096 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.096 "name": "Existed_Raid", 00:08:14.096 "uuid": "ec3d7e23-e7b1-458c-8b9c-697d390306e7", 00:08:14.096 "strip_size_kb": 64, 00:08:14.096 "state": "configuring", 00:08:14.096 "raid_level": "raid0", 00:08:14.096 "superblock": true, 00:08:14.096 "num_base_bdevs": 3, 00:08:14.096 "num_base_bdevs_discovered": 0, 00:08:14.096 "num_base_bdevs_operational": 3, 00:08:14.096 "base_bdevs_list": [ 00:08:14.096 { 00:08:14.096 "name": "BaseBdev1", 00:08:14.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.096 "is_configured": false, 00:08:14.096 "data_offset": 0, 00:08:14.096 "data_size": 0 00:08:14.096 }, 00:08:14.096 { 00:08:14.096 "name": "BaseBdev2", 00:08:14.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.096 "is_configured": false, 00:08:14.096 "data_offset": 0, 00:08:14.096 "data_size": 0 00:08:14.096 }, 00:08:14.096 { 00:08:14.096 "name": "BaseBdev3", 00:08:14.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.096 "is_configured": false, 00:08:14.096 "data_offset": 0, 00:08:14.096 "data_size": 0 00:08:14.096 } 00:08:14.096 ] 00:08:14.096 }' 00:08:14.096 10:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.096 10:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.358 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.358 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.358 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.358 [2024-11-18 10:36:40.151016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.358 [2024-11-18 10:36:40.151104] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:14.358 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.359 [2024-11-18 10:36:40.163003] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.359 [2024-11-18 10:36:40.163089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.359 [2024-11-18 10:36:40.163118] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.359 [2024-11-18 10:36:40.163141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.359 [2024-11-18 10:36:40.163158] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.359 [2024-11-18 10:36:40.163219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.359 [2024-11-18 10:36:40.217892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.359 BaseBdev1 
00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.359 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.620 [ 00:08:14.620 { 00:08:14.620 "name": "BaseBdev1", 00:08:14.620 "aliases": [ 00:08:14.620 "8faafc37-a529-4678-b26d-ba562ca56c5d" 00:08:14.620 ], 00:08:14.620 "product_name": "Malloc disk", 00:08:14.620 "block_size": 512, 00:08:14.620 "num_blocks": 65536, 00:08:14.620 "uuid": "8faafc37-a529-4678-b26d-ba562ca56c5d", 00:08:14.620 "assigned_rate_limits": { 00:08:14.620 
"rw_ios_per_sec": 0, 00:08:14.620 "rw_mbytes_per_sec": 0, 00:08:14.620 "r_mbytes_per_sec": 0, 00:08:14.620 "w_mbytes_per_sec": 0 00:08:14.620 }, 00:08:14.620 "claimed": true, 00:08:14.620 "claim_type": "exclusive_write", 00:08:14.620 "zoned": false, 00:08:14.620 "supported_io_types": { 00:08:14.620 "read": true, 00:08:14.620 "write": true, 00:08:14.620 "unmap": true, 00:08:14.620 "flush": true, 00:08:14.620 "reset": true, 00:08:14.620 "nvme_admin": false, 00:08:14.620 "nvme_io": false, 00:08:14.620 "nvme_io_md": false, 00:08:14.620 "write_zeroes": true, 00:08:14.620 "zcopy": true, 00:08:14.620 "get_zone_info": false, 00:08:14.620 "zone_management": false, 00:08:14.620 "zone_append": false, 00:08:14.620 "compare": false, 00:08:14.620 "compare_and_write": false, 00:08:14.620 "abort": true, 00:08:14.620 "seek_hole": false, 00:08:14.620 "seek_data": false, 00:08:14.620 "copy": true, 00:08:14.620 "nvme_iov_md": false 00:08:14.620 }, 00:08:14.620 "memory_domains": [ 00:08:14.620 { 00:08:14.620 "dma_device_id": "system", 00:08:14.620 "dma_device_type": 1 00:08:14.620 }, 00:08:14.620 { 00:08:14.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.620 "dma_device_type": 2 00:08:14.620 } 00:08:14.620 ], 00:08:14.620 "driver_specific": {} 00:08:14.620 } 00:08:14.620 ] 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.620 "name": "Existed_Raid", 00:08:14.620 "uuid": "0c0bdd6e-b897-4b8c-b6d4-3be5254f7fc6", 00:08:14.620 "strip_size_kb": 64, 00:08:14.620 "state": "configuring", 00:08:14.620 "raid_level": "raid0", 00:08:14.620 "superblock": true, 00:08:14.620 "num_base_bdevs": 3, 00:08:14.620 "num_base_bdevs_discovered": 1, 00:08:14.620 "num_base_bdevs_operational": 3, 00:08:14.620 "base_bdevs_list": [ 00:08:14.620 { 00:08:14.620 "name": "BaseBdev1", 00:08:14.620 "uuid": "8faafc37-a529-4678-b26d-ba562ca56c5d", 00:08:14.620 "is_configured": true, 00:08:14.620 "data_offset": 2048, 00:08:14.620 "data_size": 63488 
00:08:14.620 }, 00:08:14.620 { 00:08:14.620 "name": "BaseBdev2", 00:08:14.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.620 "is_configured": false, 00:08:14.620 "data_offset": 0, 00:08:14.620 "data_size": 0 00:08:14.620 }, 00:08:14.620 { 00:08:14.620 "name": "BaseBdev3", 00:08:14.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.620 "is_configured": false, 00:08:14.620 "data_offset": 0, 00:08:14.620 "data_size": 0 00:08:14.620 } 00:08:14.620 ] 00:08:14.620 }' 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.620 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.880 [2024-11-18 10:36:40.685085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.880 [2024-11-18 10:36:40.685192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.880 [2024-11-18 10:36:40.697127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.880 [2024-11-18 
10:36:40.699285] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.880 [2024-11-18 10:36:40.699372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.880 [2024-11-18 10:36:40.699402] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.880 [2024-11-18 10:36:40.699425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.880 10:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.881 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.881 "name": "Existed_Raid", 00:08:14.881 "uuid": "55833682-8114-49ad-9988-405c6bd90729", 00:08:14.881 "strip_size_kb": 64, 00:08:14.881 "state": "configuring", 00:08:14.881 "raid_level": "raid0", 00:08:14.881 "superblock": true, 00:08:14.881 "num_base_bdevs": 3, 00:08:14.881 "num_base_bdevs_discovered": 1, 00:08:14.881 "num_base_bdevs_operational": 3, 00:08:14.881 "base_bdevs_list": [ 00:08:14.881 { 00:08:14.881 "name": "BaseBdev1", 00:08:14.881 "uuid": "8faafc37-a529-4678-b26d-ba562ca56c5d", 00:08:14.881 "is_configured": true, 00:08:14.881 "data_offset": 2048, 00:08:14.881 "data_size": 63488 00:08:14.881 }, 00:08:14.881 { 00:08:14.881 "name": "BaseBdev2", 00:08:14.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.881 "is_configured": false, 00:08:14.881 "data_offset": 0, 00:08:14.881 "data_size": 0 00:08:14.881 }, 00:08:14.881 { 00:08:14.881 "name": "BaseBdev3", 00:08:14.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.881 "is_configured": false, 00:08:14.881 "data_offset": 0, 00:08:14.881 "data_size": 0 00:08:14.881 } 00:08:14.881 ] 00:08:14.881 }' 00:08:14.881 10:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.881 10:36:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.452 [2024-11-18 10:36:41.183600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.452 BaseBdev2 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.452 [ 00:08:15.452 { 00:08:15.452 "name": "BaseBdev2", 00:08:15.452 "aliases": [ 00:08:15.452 "82ad28d0-8066-4bb9-8346-c62ec1f7c7a7" 00:08:15.452 ], 00:08:15.452 "product_name": "Malloc disk", 00:08:15.452 "block_size": 512, 00:08:15.452 "num_blocks": 65536, 00:08:15.452 "uuid": "82ad28d0-8066-4bb9-8346-c62ec1f7c7a7", 00:08:15.452 "assigned_rate_limits": { 00:08:15.452 "rw_ios_per_sec": 0, 00:08:15.452 "rw_mbytes_per_sec": 0, 00:08:15.452 "r_mbytes_per_sec": 0, 00:08:15.452 "w_mbytes_per_sec": 0 00:08:15.452 }, 00:08:15.452 "claimed": true, 00:08:15.452 "claim_type": "exclusive_write", 00:08:15.452 "zoned": false, 00:08:15.452 "supported_io_types": { 00:08:15.452 "read": true, 00:08:15.452 "write": true, 00:08:15.452 "unmap": true, 00:08:15.452 "flush": true, 00:08:15.452 "reset": true, 00:08:15.452 "nvme_admin": false, 00:08:15.452 "nvme_io": false, 00:08:15.452 "nvme_io_md": false, 00:08:15.452 "write_zeroes": true, 00:08:15.452 "zcopy": true, 00:08:15.452 "get_zone_info": false, 00:08:15.452 "zone_management": false, 00:08:15.452 "zone_append": false, 00:08:15.452 "compare": false, 00:08:15.452 "compare_and_write": false, 00:08:15.452 "abort": true, 00:08:15.452 "seek_hole": false, 00:08:15.452 "seek_data": false, 00:08:15.452 "copy": true, 00:08:15.452 "nvme_iov_md": false 00:08:15.452 }, 00:08:15.452 "memory_domains": [ 00:08:15.452 { 00:08:15.452 "dma_device_id": "system", 00:08:15.452 "dma_device_type": 1 00:08:15.452 }, 00:08:15.452 { 00:08:15.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.452 "dma_device_type": 2 00:08:15.452 } 00:08:15.452 ], 00:08:15.452 "driver_specific": {} 00:08:15.452 } 00:08:15.452 ] 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.452 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.452 "name": "Existed_Raid", 00:08:15.452 "uuid": "55833682-8114-49ad-9988-405c6bd90729", 00:08:15.452 "strip_size_kb": 64, 00:08:15.452 "state": "configuring", 00:08:15.452 "raid_level": "raid0", 00:08:15.452 "superblock": true, 00:08:15.452 "num_base_bdevs": 3, 00:08:15.452 "num_base_bdevs_discovered": 2, 00:08:15.452 "num_base_bdevs_operational": 3, 00:08:15.452 "base_bdevs_list": [ 00:08:15.452 { 00:08:15.452 "name": "BaseBdev1", 00:08:15.452 "uuid": "8faafc37-a529-4678-b26d-ba562ca56c5d", 00:08:15.452 "is_configured": true, 00:08:15.452 "data_offset": 2048, 00:08:15.452 "data_size": 63488 00:08:15.452 }, 00:08:15.452 { 00:08:15.452 "name": "BaseBdev2", 00:08:15.452 "uuid": "82ad28d0-8066-4bb9-8346-c62ec1f7c7a7", 00:08:15.453 "is_configured": true, 00:08:15.453 "data_offset": 2048, 00:08:15.453 "data_size": 63488 00:08:15.453 }, 00:08:15.453 { 00:08:15.453 "name": "BaseBdev3", 00:08:15.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.453 "is_configured": false, 00:08:15.453 "data_offset": 0, 00:08:15.453 "data_size": 0 00:08:15.453 } 00:08:15.453 ] 00:08:15.453 }' 00:08:15.453 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.453 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 [2024-11-18 10:36:41.745041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:16.023 [2024-11-18 10:36:41.745388] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:16.023 [2024-11-18 10:36:41.745451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.023 [2024-11-18 10:36:41.745763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:16.023 BaseBdev3 00:08:16.023 [2024-11-18 10:36:41.745944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:16.023 [2024-11-18 10:36:41.745953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:16.023 [2024-11-18 10:36:41.746105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.023 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 [ 00:08:16.023 { 00:08:16.023 "name": "BaseBdev3", 00:08:16.023 "aliases": [ 00:08:16.023 "6b214846-8b67-424c-b521-36dbec864eec" 00:08:16.023 ], 00:08:16.023 "product_name": "Malloc disk", 00:08:16.023 "block_size": 512, 00:08:16.023 "num_blocks": 65536, 00:08:16.023 "uuid": "6b214846-8b67-424c-b521-36dbec864eec", 00:08:16.023 "assigned_rate_limits": { 00:08:16.023 "rw_ios_per_sec": 0, 00:08:16.023 "rw_mbytes_per_sec": 0, 00:08:16.023 "r_mbytes_per_sec": 0, 00:08:16.023 "w_mbytes_per_sec": 0 00:08:16.023 }, 00:08:16.023 "claimed": true, 00:08:16.023 "claim_type": "exclusive_write", 00:08:16.023 "zoned": false, 00:08:16.023 "supported_io_types": { 00:08:16.023 "read": true, 00:08:16.023 "write": true, 00:08:16.023 "unmap": true, 00:08:16.023 "flush": true, 00:08:16.023 "reset": true, 00:08:16.023 "nvme_admin": false, 00:08:16.023 "nvme_io": false, 00:08:16.023 "nvme_io_md": false, 00:08:16.023 "write_zeroes": true, 00:08:16.023 "zcopy": true, 00:08:16.023 "get_zone_info": false, 00:08:16.023 "zone_management": false, 00:08:16.023 "zone_append": false, 00:08:16.023 "compare": false, 00:08:16.024 "compare_and_write": false, 00:08:16.024 "abort": true, 00:08:16.024 "seek_hole": false, 00:08:16.024 "seek_data": false, 00:08:16.024 "copy": true, 00:08:16.024 "nvme_iov_md": false 00:08:16.024 }, 00:08:16.024 "memory_domains": [ 00:08:16.024 { 00:08:16.024 "dma_device_id": "system", 00:08:16.024 "dma_device_type": 1 00:08:16.024 }, 00:08:16.024 { 00:08:16.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.024 "dma_device_type": 2 00:08:16.024 } 00:08:16.024 ], 00:08:16.024 "driver_specific": 
{} 00:08:16.024 } 00:08:16.024 ] 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.024 
10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.024 "name": "Existed_Raid", 00:08:16.024 "uuid": "55833682-8114-49ad-9988-405c6bd90729", 00:08:16.024 "strip_size_kb": 64, 00:08:16.024 "state": "online", 00:08:16.024 "raid_level": "raid0", 00:08:16.024 "superblock": true, 00:08:16.024 "num_base_bdevs": 3, 00:08:16.024 "num_base_bdevs_discovered": 3, 00:08:16.024 "num_base_bdevs_operational": 3, 00:08:16.024 "base_bdevs_list": [ 00:08:16.024 { 00:08:16.024 "name": "BaseBdev1", 00:08:16.024 "uuid": "8faafc37-a529-4678-b26d-ba562ca56c5d", 00:08:16.024 "is_configured": true, 00:08:16.024 "data_offset": 2048, 00:08:16.024 "data_size": 63488 00:08:16.024 }, 00:08:16.024 { 00:08:16.024 "name": "BaseBdev2", 00:08:16.024 "uuid": "82ad28d0-8066-4bb9-8346-c62ec1f7c7a7", 00:08:16.024 "is_configured": true, 00:08:16.024 "data_offset": 2048, 00:08:16.024 "data_size": 63488 00:08:16.024 }, 00:08:16.024 { 00:08:16.024 "name": "BaseBdev3", 00:08:16.024 "uuid": "6b214846-8b67-424c-b521-36dbec864eec", 00:08:16.024 "is_configured": true, 00:08:16.024 "data_offset": 2048, 00:08:16.024 "data_size": 63488 00:08:16.024 } 00:08:16.024 ] 00:08:16.024 }' 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.024 10:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.594 [2024-11-18 10:36:42.232480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.594 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.594 "name": "Existed_Raid", 00:08:16.594 "aliases": [ 00:08:16.594 "55833682-8114-49ad-9988-405c6bd90729" 00:08:16.594 ], 00:08:16.594 "product_name": "Raid Volume", 00:08:16.594 "block_size": 512, 00:08:16.594 "num_blocks": 190464, 00:08:16.594 "uuid": "55833682-8114-49ad-9988-405c6bd90729", 00:08:16.594 "assigned_rate_limits": { 00:08:16.594 "rw_ios_per_sec": 0, 00:08:16.594 "rw_mbytes_per_sec": 0, 00:08:16.594 "r_mbytes_per_sec": 0, 00:08:16.594 "w_mbytes_per_sec": 0 00:08:16.594 }, 00:08:16.594 "claimed": false, 00:08:16.594 "zoned": false, 00:08:16.594 "supported_io_types": { 00:08:16.594 "read": true, 00:08:16.594 "write": true, 00:08:16.594 "unmap": true, 00:08:16.594 "flush": true, 00:08:16.594 "reset": true, 00:08:16.594 "nvme_admin": false, 00:08:16.594 "nvme_io": false, 00:08:16.594 "nvme_io_md": false, 00:08:16.594 
"write_zeroes": true, 00:08:16.594 "zcopy": false, 00:08:16.594 "get_zone_info": false, 00:08:16.594 "zone_management": false, 00:08:16.594 "zone_append": false, 00:08:16.594 "compare": false, 00:08:16.594 "compare_and_write": false, 00:08:16.594 "abort": false, 00:08:16.594 "seek_hole": false, 00:08:16.594 "seek_data": false, 00:08:16.594 "copy": false, 00:08:16.594 "nvme_iov_md": false 00:08:16.594 }, 00:08:16.594 "memory_domains": [ 00:08:16.594 { 00:08:16.594 "dma_device_id": "system", 00:08:16.594 "dma_device_type": 1 00:08:16.594 }, 00:08:16.594 { 00:08:16.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.594 "dma_device_type": 2 00:08:16.594 }, 00:08:16.594 { 00:08:16.594 "dma_device_id": "system", 00:08:16.594 "dma_device_type": 1 00:08:16.594 }, 00:08:16.594 { 00:08:16.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.594 "dma_device_type": 2 00:08:16.594 }, 00:08:16.594 { 00:08:16.594 "dma_device_id": "system", 00:08:16.594 "dma_device_type": 1 00:08:16.594 }, 00:08:16.594 { 00:08:16.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.594 "dma_device_type": 2 00:08:16.594 } 00:08:16.594 ], 00:08:16.594 "driver_specific": { 00:08:16.594 "raid": { 00:08:16.594 "uuid": "55833682-8114-49ad-9988-405c6bd90729", 00:08:16.594 "strip_size_kb": 64, 00:08:16.594 "state": "online", 00:08:16.594 "raid_level": "raid0", 00:08:16.594 "superblock": true, 00:08:16.595 "num_base_bdevs": 3, 00:08:16.595 "num_base_bdevs_discovered": 3, 00:08:16.595 "num_base_bdevs_operational": 3, 00:08:16.595 "base_bdevs_list": [ 00:08:16.595 { 00:08:16.595 "name": "BaseBdev1", 00:08:16.595 "uuid": "8faafc37-a529-4678-b26d-ba562ca56c5d", 00:08:16.595 "is_configured": true, 00:08:16.595 "data_offset": 2048, 00:08:16.595 "data_size": 63488 00:08:16.595 }, 00:08:16.595 { 00:08:16.595 "name": "BaseBdev2", 00:08:16.595 "uuid": "82ad28d0-8066-4bb9-8346-c62ec1f7c7a7", 00:08:16.595 "is_configured": true, 00:08:16.595 "data_offset": 2048, 00:08:16.595 "data_size": 63488 00:08:16.595 }, 
00:08:16.595 { 00:08:16.595 "name": "BaseBdev3", 00:08:16.595 "uuid": "6b214846-8b67-424c-b521-36dbec864eec", 00:08:16.595 "is_configured": true, 00:08:16.595 "data_offset": 2048, 00:08:16.595 "data_size": 63488 00:08:16.595 } 00:08:16.595 ] 00:08:16.595 } 00:08:16.595 } 00:08:16.595 }' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:16.595 BaseBdev2 00:08:16.595 BaseBdev3' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.595 
10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.595 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.595 [2024-11-18 10:36:42.467850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.595 [2024-11-18 10:36:42.467918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.595 [2024-11-18 10:36:42.467989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.855 "name": "Existed_Raid", 00:08:16.855 "uuid": "55833682-8114-49ad-9988-405c6bd90729", 00:08:16.855 "strip_size_kb": 64, 00:08:16.855 "state": "offline", 00:08:16.855 "raid_level": "raid0", 00:08:16.855 "superblock": true, 00:08:16.855 "num_base_bdevs": 3, 00:08:16.855 "num_base_bdevs_discovered": 2, 00:08:16.855 "num_base_bdevs_operational": 2, 00:08:16.855 "base_bdevs_list": [ 00:08:16.855 { 00:08:16.855 "name": null, 00:08:16.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.855 "is_configured": false, 00:08:16.855 "data_offset": 0, 00:08:16.855 "data_size": 63488 00:08:16.855 }, 00:08:16.855 { 00:08:16.855 "name": "BaseBdev2", 00:08:16.855 "uuid": "82ad28d0-8066-4bb9-8346-c62ec1f7c7a7", 00:08:16.855 "is_configured": true, 00:08:16.855 "data_offset": 2048, 00:08:16.855 "data_size": 63488 00:08:16.855 }, 00:08:16.855 { 00:08:16.855 "name": "BaseBdev3", 00:08:16.855 "uuid": "6b214846-8b67-424c-b521-36dbec864eec", 
00:08:16.855 "is_configured": true, 00:08:16.855 "data_offset": 2048, 00:08:16.855 "data_size": 63488 00:08:16.855 } 00:08:16.855 ] 00:08:16.855 }' 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.855 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.115 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:17.115 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.115 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.115 10:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.115 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.115 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.376 10:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.376 [2024-11-18 10:36:43.028261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.376 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.376 [2024-11-18 10:36:43.186917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:17.376 [2024-11-18 10:36:43.187069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.636 BaseBdev2 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:17.636 10:36:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.636 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.636 [ 00:08:17.636 { 00:08:17.636 "name": "BaseBdev2", 00:08:17.636 "aliases": [ 00:08:17.636 "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412" 00:08:17.636 ], 00:08:17.636 "product_name": "Malloc disk", 00:08:17.636 "block_size": 512, 00:08:17.636 "num_blocks": 65536, 00:08:17.636 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:17.636 "assigned_rate_limits": { 00:08:17.636 "rw_ios_per_sec": 0, 00:08:17.636 "rw_mbytes_per_sec": 0, 00:08:17.636 "r_mbytes_per_sec": 0, 00:08:17.636 "w_mbytes_per_sec": 0 00:08:17.636 }, 00:08:17.636 "claimed": false, 00:08:17.637 "zoned": false, 00:08:17.637 "supported_io_types": { 00:08:17.637 "read": true, 00:08:17.637 "write": true, 00:08:17.637 "unmap": true, 00:08:17.637 "flush": true, 00:08:17.637 "reset": true, 00:08:17.637 "nvme_admin": false, 00:08:17.637 "nvme_io": false, 00:08:17.637 "nvme_io_md": false, 00:08:17.637 "write_zeroes": true, 00:08:17.637 "zcopy": true, 00:08:17.637 "get_zone_info": false, 00:08:17.637 
"zone_management": false, 00:08:17.637 "zone_append": false, 00:08:17.637 "compare": false, 00:08:17.637 "compare_and_write": false, 00:08:17.637 "abort": true, 00:08:17.637 "seek_hole": false, 00:08:17.637 "seek_data": false, 00:08:17.637 "copy": true, 00:08:17.637 "nvme_iov_md": false 00:08:17.637 }, 00:08:17.637 "memory_domains": [ 00:08:17.637 { 00:08:17.637 "dma_device_id": "system", 00:08:17.637 "dma_device_type": 1 00:08:17.637 }, 00:08:17.637 { 00:08:17.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.637 "dma_device_type": 2 00:08:17.637 } 00:08:17.637 ], 00:08:17.637 "driver_specific": {} 00:08:17.637 } 00:08:17.637 ] 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 BaseBdev3 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 [ 00:08:17.637 { 00:08:17.637 "name": "BaseBdev3", 00:08:17.637 "aliases": [ 00:08:17.637 "7a799f45-9f34-435d-919e-e02250801b45" 00:08:17.637 ], 00:08:17.637 "product_name": "Malloc disk", 00:08:17.637 "block_size": 512, 00:08:17.637 "num_blocks": 65536, 00:08:17.637 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:17.637 "assigned_rate_limits": { 00:08:17.637 "rw_ios_per_sec": 0, 00:08:17.637 "rw_mbytes_per_sec": 0, 00:08:17.637 "r_mbytes_per_sec": 0, 00:08:17.637 "w_mbytes_per_sec": 0 00:08:17.637 }, 00:08:17.637 "claimed": false, 00:08:17.637 "zoned": false, 00:08:17.637 "supported_io_types": { 00:08:17.637 "read": true, 00:08:17.637 "write": true, 00:08:17.637 "unmap": true, 00:08:17.637 "flush": true, 00:08:17.637 "reset": true, 00:08:17.637 "nvme_admin": false, 00:08:17.637 "nvme_io": false, 00:08:17.637 "nvme_io_md": false, 00:08:17.637 "write_zeroes": true, 00:08:17.637 
"zcopy": true, 00:08:17.637 "get_zone_info": false, 00:08:17.637 "zone_management": false, 00:08:17.637 "zone_append": false, 00:08:17.637 "compare": false, 00:08:17.637 "compare_and_write": false, 00:08:17.637 "abort": true, 00:08:17.637 "seek_hole": false, 00:08:17.637 "seek_data": false, 00:08:17.637 "copy": true, 00:08:17.637 "nvme_iov_md": false 00:08:17.637 }, 00:08:17.637 "memory_domains": [ 00:08:17.637 { 00:08:17.637 "dma_device_id": "system", 00:08:17.637 "dma_device_type": 1 00:08:17.637 }, 00:08:17.637 { 00:08:17.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.637 "dma_device_type": 2 00:08:17.637 } 00:08:17.637 ], 00:08:17.637 "driver_specific": {} 00:08:17.637 } 00:08:17.637 ] 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 [2024-11-18 10:36:43.518366] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.637 [2024-11-18 10:36:43.518496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.637 [2024-11-18 10:36:43.518547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.898 [2024-11-18 10:36:43.520669] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.898 10:36:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.898 "name": "Existed_Raid", 00:08:17.898 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:17.898 "strip_size_kb": 64, 00:08:17.898 "state": "configuring", 00:08:17.898 "raid_level": "raid0", 00:08:17.898 "superblock": true, 00:08:17.898 "num_base_bdevs": 3, 00:08:17.898 "num_base_bdevs_discovered": 2, 00:08:17.898 "num_base_bdevs_operational": 3, 00:08:17.898 "base_bdevs_list": [ 00:08:17.898 { 00:08:17.898 "name": "BaseBdev1", 00:08:17.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.898 "is_configured": false, 00:08:17.898 "data_offset": 0, 00:08:17.898 "data_size": 0 00:08:17.898 }, 00:08:17.898 { 00:08:17.898 "name": "BaseBdev2", 00:08:17.898 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:17.898 "is_configured": true, 00:08:17.898 "data_offset": 2048, 00:08:17.898 "data_size": 63488 00:08:17.898 }, 00:08:17.898 { 00:08:17.898 "name": "BaseBdev3", 00:08:17.898 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:17.898 "is_configured": true, 00:08:17.898 "data_offset": 2048, 00:08:17.898 "data_size": 63488 00:08:17.898 } 00:08:17.898 ] 00:08:17.898 }' 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.898 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.158 10:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:18.158 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.158 10:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.158 [2024-11-18 10:36:43.997516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.158 10:36:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.158 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.418 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.418 "name": "Existed_Raid", 00:08:18.418 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:18.418 "strip_size_kb": 64, 
00:08:18.418 "state": "configuring", 00:08:18.418 "raid_level": "raid0", 00:08:18.418 "superblock": true, 00:08:18.418 "num_base_bdevs": 3, 00:08:18.418 "num_base_bdevs_discovered": 1, 00:08:18.418 "num_base_bdevs_operational": 3, 00:08:18.418 "base_bdevs_list": [ 00:08:18.418 { 00:08:18.418 "name": "BaseBdev1", 00:08:18.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.418 "is_configured": false, 00:08:18.418 "data_offset": 0, 00:08:18.418 "data_size": 0 00:08:18.418 }, 00:08:18.418 { 00:08:18.418 "name": null, 00:08:18.418 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:18.418 "is_configured": false, 00:08:18.418 "data_offset": 0, 00:08:18.418 "data_size": 63488 00:08:18.418 }, 00:08:18.418 { 00:08:18.418 "name": "BaseBdev3", 00:08:18.418 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:18.418 "is_configured": true, 00:08:18.418 "data_offset": 2048, 00:08:18.418 "data_size": 63488 00:08:18.418 } 00:08:18.418 ] 00:08:18.418 }' 00:08:18.418 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.418 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.678 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.679 [2024-11-18 10:36:44.509382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.679 BaseBdev1 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.679 
[ 00:08:18.679 { 00:08:18.679 "name": "BaseBdev1", 00:08:18.679 "aliases": [ 00:08:18.679 "5af54404-bfc6-4046-8f8c-e854a37b97ec" 00:08:18.679 ], 00:08:18.679 "product_name": "Malloc disk", 00:08:18.679 "block_size": 512, 00:08:18.679 "num_blocks": 65536, 00:08:18.679 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:18.679 "assigned_rate_limits": { 00:08:18.679 "rw_ios_per_sec": 0, 00:08:18.679 "rw_mbytes_per_sec": 0, 00:08:18.679 "r_mbytes_per_sec": 0, 00:08:18.679 "w_mbytes_per_sec": 0 00:08:18.679 }, 00:08:18.679 "claimed": true, 00:08:18.679 "claim_type": "exclusive_write", 00:08:18.679 "zoned": false, 00:08:18.679 "supported_io_types": { 00:08:18.679 "read": true, 00:08:18.679 "write": true, 00:08:18.679 "unmap": true, 00:08:18.679 "flush": true, 00:08:18.679 "reset": true, 00:08:18.679 "nvme_admin": false, 00:08:18.679 "nvme_io": false, 00:08:18.679 "nvme_io_md": false, 00:08:18.679 "write_zeroes": true, 00:08:18.679 "zcopy": true, 00:08:18.679 "get_zone_info": false, 00:08:18.679 "zone_management": false, 00:08:18.679 "zone_append": false, 00:08:18.679 "compare": false, 00:08:18.679 "compare_and_write": false, 00:08:18.679 "abort": true, 00:08:18.679 "seek_hole": false, 00:08:18.679 "seek_data": false, 00:08:18.679 "copy": true, 00:08:18.679 "nvme_iov_md": false 00:08:18.679 }, 00:08:18.679 "memory_domains": [ 00:08:18.679 { 00:08:18.679 "dma_device_id": "system", 00:08:18.679 "dma_device_type": 1 00:08:18.679 }, 00:08:18.679 { 00:08:18.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.679 "dma_device_type": 2 00:08:18.679 } 00:08:18.679 ], 00:08:18.679 "driver_specific": {} 00:08:18.679 } 00:08:18.679 ] 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.679 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.939 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.939 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.939 "name": "Existed_Raid", 00:08:18.939 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:18.939 "strip_size_kb": 64, 00:08:18.939 "state": "configuring", 00:08:18.939 "raid_level": "raid0", 00:08:18.939 "superblock": true, 
00:08:18.939 "num_base_bdevs": 3, 00:08:18.939 "num_base_bdevs_discovered": 2, 00:08:18.939 "num_base_bdevs_operational": 3, 00:08:18.939 "base_bdevs_list": [ 00:08:18.939 { 00:08:18.939 "name": "BaseBdev1", 00:08:18.939 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:18.939 "is_configured": true, 00:08:18.939 "data_offset": 2048, 00:08:18.939 "data_size": 63488 00:08:18.939 }, 00:08:18.939 { 00:08:18.939 "name": null, 00:08:18.939 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:18.939 "is_configured": false, 00:08:18.939 "data_offset": 0, 00:08:18.939 "data_size": 63488 00:08:18.939 }, 00:08:18.939 { 00:08:18.939 "name": "BaseBdev3", 00:08:18.939 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:18.939 "is_configured": true, 00:08:18.939 "data_offset": 2048, 00:08:18.939 "data_size": 63488 00:08:18.939 } 00:08:18.939 ] 00:08:18.939 }' 00:08:18.939 10:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.939 10:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.199 [2024-11-18 10:36:45.056510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:19.199 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.458 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.458 "name": "Existed_Raid", 00:08:19.458 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:19.458 "strip_size_kb": 64, 00:08:19.458 "state": "configuring", 00:08:19.458 "raid_level": "raid0", 00:08:19.458 "superblock": true, 00:08:19.458 "num_base_bdevs": 3, 00:08:19.458 "num_base_bdevs_discovered": 1, 00:08:19.458 "num_base_bdevs_operational": 3, 00:08:19.458 "base_bdevs_list": [ 00:08:19.458 { 00:08:19.458 "name": "BaseBdev1", 00:08:19.458 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:19.458 "is_configured": true, 00:08:19.458 "data_offset": 2048, 00:08:19.458 "data_size": 63488 00:08:19.458 }, 00:08:19.458 { 00:08:19.458 "name": null, 00:08:19.458 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:19.458 "is_configured": false, 00:08:19.458 "data_offset": 0, 00:08:19.458 "data_size": 63488 00:08:19.458 }, 00:08:19.458 { 00:08:19.458 "name": null, 00:08:19.459 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:19.459 "is_configured": false, 00:08:19.459 "data_offset": 0, 00:08:19.459 "data_size": 63488 00:08:19.459 } 00:08:19.459 ] 00:08:19.459 }' 00:08:19.459 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.459 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.718 [2024-11-18 10:36:45.527748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.718 10:36:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.718 "name": "Existed_Raid", 00:08:19.718 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:19.718 "strip_size_kb": 64, 00:08:19.718 "state": "configuring", 00:08:19.718 "raid_level": "raid0", 00:08:19.718 "superblock": true, 00:08:19.718 "num_base_bdevs": 3, 00:08:19.718 "num_base_bdevs_discovered": 2, 00:08:19.718 "num_base_bdevs_operational": 3, 00:08:19.718 "base_bdevs_list": [ 00:08:19.718 { 00:08:19.718 "name": "BaseBdev1", 00:08:19.718 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:19.718 "is_configured": true, 00:08:19.718 "data_offset": 2048, 00:08:19.718 "data_size": 63488 00:08:19.718 }, 00:08:19.718 { 00:08:19.718 "name": null, 00:08:19.718 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:19.718 "is_configured": false, 00:08:19.718 "data_offset": 0, 00:08:19.718 "data_size": 63488 00:08:19.718 }, 00:08:19.718 { 00:08:19.718 "name": "BaseBdev3", 00:08:19.718 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:19.718 "is_configured": true, 00:08:19.718 "data_offset": 2048, 00:08:19.718 "data_size": 63488 00:08:19.718 } 00:08:19.718 ] 00:08:19.718 }' 00:08:19.718 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.718 
10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.319 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.319 10:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:20.319 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.319 10:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.319 [2024-11-18 10:36:46.031154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.319 10:36:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.319 "name": "Existed_Raid", 00:08:20.319 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:20.319 "strip_size_kb": 64, 00:08:20.319 "state": "configuring", 00:08:20.319 "raid_level": "raid0", 00:08:20.319 "superblock": true, 00:08:20.319 "num_base_bdevs": 3, 00:08:20.319 "num_base_bdevs_discovered": 1, 00:08:20.319 "num_base_bdevs_operational": 3, 00:08:20.319 "base_bdevs_list": [ 00:08:20.319 { 00:08:20.319 "name": null, 00:08:20.319 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:20.319 "is_configured": false, 00:08:20.319 "data_offset": 0, 00:08:20.319 "data_size": 63488 00:08:20.319 }, 00:08:20.319 { 00:08:20.319 "name": null, 00:08:20.319 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:20.319 "is_configured": false, 
00:08:20.319 "data_offset": 0, 00:08:20.319 "data_size": 63488 00:08:20.319 }, 00:08:20.319 { 00:08:20.319 "name": "BaseBdev3", 00:08:20.319 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:20.319 "is_configured": true, 00:08:20.319 "data_offset": 2048, 00:08:20.319 "data_size": 63488 00:08:20.319 } 00:08:20.319 ] 00:08:20.319 }' 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.319 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.889 [2024-11-18 10:36:46.623099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.889 "name": "Existed_Raid", 00:08:20.889 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:20.889 "strip_size_kb": 64, 00:08:20.889 "state": "configuring", 00:08:20.889 "raid_level": "raid0", 00:08:20.889 "superblock": true, 00:08:20.889 
"num_base_bdevs": 3, 00:08:20.889 "num_base_bdevs_discovered": 2, 00:08:20.889 "num_base_bdevs_operational": 3, 00:08:20.889 "base_bdevs_list": [ 00:08:20.889 { 00:08:20.889 "name": null, 00:08:20.889 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:20.889 "is_configured": false, 00:08:20.889 "data_offset": 0, 00:08:20.889 "data_size": 63488 00:08:20.889 }, 00:08:20.889 { 00:08:20.889 "name": "BaseBdev2", 00:08:20.889 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:20.889 "is_configured": true, 00:08:20.889 "data_offset": 2048, 00:08:20.889 "data_size": 63488 00:08:20.889 }, 00:08:20.889 { 00:08:20.889 "name": "BaseBdev3", 00:08:20.889 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:20.889 "is_configured": true, 00:08:20.889 "data_offset": 2048, 00:08:20.889 "data_size": 63488 00:08:20.889 } 00:08:20.889 ] 00:08:20.889 }' 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.889 10:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5af54404-bfc6-4046-8f8c-e854a37b97ec 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.459 [2024-11-18 10:36:47.202859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:21.459 [2024-11-18 10:36:47.203152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:21.459 [2024-11-18 10:36:47.203232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.459 [2024-11-18 10:36:47.203527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:21.459 NewBaseBdev 00:08:21.459 [2024-11-18 10:36:47.203711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:21.459 [2024-11-18 10:36:47.203723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:21.459 [2024-11-18 10:36:47.203874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.459 [ 00:08:21.459 { 00:08:21.459 "name": "NewBaseBdev", 00:08:21.459 "aliases": [ 00:08:21.459 "5af54404-bfc6-4046-8f8c-e854a37b97ec" 00:08:21.459 ], 00:08:21.459 "product_name": "Malloc disk", 00:08:21.459 "block_size": 512, 00:08:21.459 "num_blocks": 65536, 00:08:21.459 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:21.459 "assigned_rate_limits": { 00:08:21.459 "rw_ios_per_sec": 0, 00:08:21.459 "rw_mbytes_per_sec": 0, 00:08:21.459 "r_mbytes_per_sec": 0, 00:08:21.459 "w_mbytes_per_sec": 0 00:08:21.459 }, 00:08:21.459 "claimed": true, 00:08:21.459 "claim_type": "exclusive_write", 00:08:21.459 "zoned": false, 00:08:21.459 "supported_io_types": { 00:08:21.459 "read": true, 00:08:21.459 
"write": true, 00:08:21.459 "unmap": true, 00:08:21.459 "flush": true, 00:08:21.459 "reset": true, 00:08:21.459 "nvme_admin": false, 00:08:21.459 "nvme_io": false, 00:08:21.459 "nvme_io_md": false, 00:08:21.459 "write_zeroes": true, 00:08:21.459 "zcopy": true, 00:08:21.459 "get_zone_info": false, 00:08:21.459 "zone_management": false, 00:08:21.459 "zone_append": false, 00:08:21.459 "compare": false, 00:08:21.459 "compare_and_write": false, 00:08:21.459 "abort": true, 00:08:21.459 "seek_hole": false, 00:08:21.459 "seek_data": false, 00:08:21.459 "copy": true, 00:08:21.459 "nvme_iov_md": false 00:08:21.459 }, 00:08:21.459 "memory_domains": [ 00:08:21.459 { 00:08:21.459 "dma_device_id": "system", 00:08:21.459 "dma_device_type": 1 00:08:21.459 }, 00:08:21.459 { 00:08:21.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.459 "dma_device_type": 2 00:08:21.459 } 00:08:21.459 ], 00:08:21.459 "driver_specific": {} 00:08:21.459 } 00:08:21.459 ] 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.459 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.460 "name": "Existed_Raid", 00:08:21.460 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:21.460 "strip_size_kb": 64, 00:08:21.460 "state": "online", 00:08:21.460 "raid_level": "raid0", 00:08:21.460 "superblock": true, 00:08:21.460 "num_base_bdevs": 3, 00:08:21.460 "num_base_bdevs_discovered": 3, 00:08:21.460 "num_base_bdevs_operational": 3, 00:08:21.460 "base_bdevs_list": [ 00:08:21.460 { 00:08:21.460 "name": "NewBaseBdev", 00:08:21.460 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:21.460 "is_configured": true, 00:08:21.460 "data_offset": 2048, 00:08:21.460 "data_size": 63488 00:08:21.460 }, 00:08:21.460 { 00:08:21.460 "name": "BaseBdev2", 00:08:21.460 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:21.460 "is_configured": true, 00:08:21.460 "data_offset": 2048, 00:08:21.460 "data_size": 63488 00:08:21.460 }, 00:08:21.460 { 00:08:21.460 "name": "BaseBdev3", 00:08:21.460 "uuid": 
"7a799f45-9f34-435d-919e-e02250801b45", 00:08:21.460 "is_configured": true, 00:08:21.460 "data_offset": 2048, 00:08:21.460 "data_size": 63488 00:08:21.460 } 00:08:21.460 ] 00:08:21.460 }' 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.460 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.029 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.029 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:22.029 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.029 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.030 [2024-11-18 10:36:47.638397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.030 "name": "Existed_Raid", 00:08:22.030 "aliases": [ 00:08:22.030 "74e8f044-32cb-4948-916c-9b2b26670fcf" 
00:08:22.030 ], 00:08:22.030 "product_name": "Raid Volume", 00:08:22.030 "block_size": 512, 00:08:22.030 "num_blocks": 190464, 00:08:22.030 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:22.030 "assigned_rate_limits": { 00:08:22.030 "rw_ios_per_sec": 0, 00:08:22.030 "rw_mbytes_per_sec": 0, 00:08:22.030 "r_mbytes_per_sec": 0, 00:08:22.030 "w_mbytes_per_sec": 0 00:08:22.030 }, 00:08:22.030 "claimed": false, 00:08:22.030 "zoned": false, 00:08:22.030 "supported_io_types": { 00:08:22.030 "read": true, 00:08:22.030 "write": true, 00:08:22.030 "unmap": true, 00:08:22.030 "flush": true, 00:08:22.030 "reset": true, 00:08:22.030 "nvme_admin": false, 00:08:22.030 "nvme_io": false, 00:08:22.030 "nvme_io_md": false, 00:08:22.030 "write_zeroes": true, 00:08:22.030 "zcopy": false, 00:08:22.030 "get_zone_info": false, 00:08:22.030 "zone_management": false, 00:08:22.030 "zone_append": false, 00:08:22.030 "compare": false, 00:08:22.030 "compare_and_write": false, 00:08:22.030 "abort": false, 00:08:22.030 "seek_hole": false, 00:08:22.030 "seek_data": false, 00:08:22.030 "copy": false, 00:08:22.030 "nvme_iov_md": false 00:08:22.030 }, 00:08:22.030 "memory_domains": [ 00:08:22.030 { 00:08:22.030 "dma_device_id": "system", 00:08:22.030 "dma_device_type": 1 00:08:22.030 }, 00:08:22.030 { 00:08:22.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.030 "dma_device_type": 2 00:08:22.030 }, 00:08:22.030 { 00:08:22.030 "dma_device_id": "system", 00:08:22.030 "dma_device_type": 1 00:08:22.030 }, 00:08:22.030 { 00:08:22.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.030 "dma_device_type": 2 00:08:22.030 }, 00:08:22.030 { 00:08:22.030 "dma_device_id": "system", 00:08:22.030 "dma_device_type": 1 00:08:22.030 }, 00:08:22.030 { 00:08:22.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.030 "dma_device_type": 2 00:08:22.030 } 00:08:22.030 ], 00:08:22.030 "driver_specific": { 00:08:22.030 "raid": { 00:08:22.030 "uuid": "74e8f044-32cb-4948-916c-9b2b26670fcf", 00:08:22.030 
"strip_size_kb": 64, 00:08:22.030 "state": "online", 00:08:22.030 "raid_level": "raid0", 00:08:22.030 "superblock": true, 00:08:22.030 "num_base_bdevs": 3, 00:08:22.030 "num_base_bdevs_discovered": 3, 00:08:22.030 "num_base_bdevs_operational": 3, 00:08:22.030 "base_bdevs_list": [ 00:08:22.030 { 00:08:22.030 "name": "NewBaseBdev", 00:08:22.030 "uuid": "5af54404-bfc6-4046-8f8c-e854a37b97ec", 00:08:22.030 "is_configured": true, 00:08:22.030 "data_offset": 2048, 00:08:22.030 "data_size": 63488 00:08:22.030 }, 00:08:22.030 { 00:08:22.030 "name": "BaseBdev2", 00:08:22.030 "uuid": "757e93d0-0b39-4ace-8cb4-c5a0e6f9d412", 00:08:22.030 "is_configured": true, 00:08:22.030 "data_offset": 2048, 00:08:22.030 "data_size": 63488 00:08:22.030 }, 00:08:22.030 { 00:08:22.030 "name": "BaseBdev3", 00:08:22.030 "uuid": "7a799f45-9f34-435d-919e-e02250801b45", 00:08:22.030 "is_configured": true, 00:08:22.030 "data_offset": 2048, 00:08:22.030 "data_size": 63488 00:08:22.030 } 00:08:22.030 ] 00:08:22.030 } 00:08:22.030 } 00:08:22.030 }' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:22.030 BaseBdev2 00:08:22.030 BaseBdev3' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.030 [2024-11-18 10:36:47.881691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.030 [2024-11-18 10:36:47.881715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.030 [2024-11-18 10:36:47.881793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.030 [2024-11-18 10:36:47.881845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.030 [2024-11-18 10:36:47.881856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64324 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64324 ']' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64324 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.030 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64324 00:08:22.290 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.290 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.290 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64324' 00:08:22.290 killing process with pid 64324 00:08:22.290 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64324 00:08:22.290 [2024-11-18 10:36:47.932339] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.290 10:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64324 00:08:22.550 [2024-11-18 10:36:48.245561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.934 10:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:23.934 00:08:23.934 real 0m10.629s 00:08:23.934 user 0m16.685s 00:08:23.934 sys 0m1.990s 00:08:23.934 10:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.934 10:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 ************************************ 00:08:23.934 END TEST raid_state_function_test_sb 00:08:23.934 ************************************ 00:08:23.934 10:36:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:23.934 10:36:49 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:23.934 10:36:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.934 10:36:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 ************************************ 00:08:23.934 START TEST raid_superblock_test 00:08:23.934 ************************************ 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:23.934 10:36:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64946 00:08:23.934 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:23.935 10:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64946 00:08:23.935 10:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64946 ']' 00:08:23.935 10:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.935 10:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.935 10:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.935 10:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.935 10:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.935 [2024-11-18 10:36:49.568449] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:23.935 [2024-11-18 10:36:49.568561] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64946 ] 00:08:23.935 [2024-11-18 10:36:49.744052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.195 [2024-11-18 10:36:49.878235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.454 [2024-11-18 10:36:50.104720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.454 [2024-11-18 10:36:50.104781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:24.714 
10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.714 malloc1 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.714 [2024-11-18 10:36:50.437226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.714 [2024-11-18 10:36:50.437333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.714 [2024-11-18 10:36:50.437377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:24.714 [2024-11-18 10:36:50.437407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.714 [2024-11-18 10:36:50.439738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.714 [2024-11-18 10:36:50.439810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.714 pt1 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.714 malloc2 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.714 [2024-11-18 10:36:50.500882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:24.714 [2024-11-18 10:36:50.500936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.714 [2024-11-18 10:36:50.500960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:24.714 [2024-11-18 10:36:50.500970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.714 [2024-11-18 10:36:50.503342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.714 [2024-11-18 10:36:50.503377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:24.714 
pt2 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:24.714 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:24.715 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:24.715 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:24.715 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.715 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.715 malloc3 00:08:24.715 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.715 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:24.715 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.715 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.715 [2024-11-18 10:36:50.596430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:24.715 [2024-11-18 10:36:50.596524] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.715 [2024-11-18 10:36:50.596565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:24.715 [2024-11-18 10:36:50.596605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.974 [2024-11-18 10:36:50.599006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.974 [2024-11-18 10:36:50.599105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:24.974 pt3 00:08:24.974 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.974 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.975 [2024-11-18 10:36:50.608465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.975 [2024-11-18 10:36:50.610482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.975 [2024-11-18 10:36:50.610585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:24.975 [2024-11-18 10:36:50.610761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:24.975 [2024-11-18 10:36:50.610808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.975 [2024-11-18 10:36:50.611102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:24.975 [2024-11-18 10:36:50.611357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:24.975 [2024-11-18 10:36:50.611402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:24.975 [2024-11-18 10:36:50.611600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.975 10:36:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.975 "name": "raid_bdev1", 00:08:24.975 "uuid": "2990263d-5f58-42c8-acf1-53b62162a7a0", 00:08:24.975 "strip_size_kb": 64, 00:08:24.975 "state": "online", 00:08:24.975 "raid_level": "raid0", 00:08:24.975 "superblock": true, 00:08:24.975 "num_base_bdevs": 3, 00:08:24.975 "num_base_bdevs_discovered": 3, 00:08:24.975 "num_base_bdevs_operational": 3, 00:08:24.975 "base_bdevs_list": [ 00:08:24.975 { 00:08:24.975 "name": "pt1", 00:08:24.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.975 "is_configured": true, 00:08:24.975 "data_offset": 2048, 00:08:24.975 "data_size": 63488 00:08:24.975 }, 00:08:24.975 { 00:08:24.975 "name": "pt2", 00:08:24.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.975 "is_configured": true, 00:08:24.975 "data_offset": 2048, 00:08:24.975 "data_size": 63488 00:08:24.975 }, 00:08:24.975 { 00:08:24.975 "name": "pt3", 00:08:24.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.975 "is_configured": true, 00:08:24.975 "data_offset": 2048, 00:08:24.975 "data_size": 63488 00:08:24.975 } 00:08:24.975 ] 00:08:24.975 }' 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.975 10:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:25.235 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:25.235 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.235 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:25.235 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.235 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.235 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.235 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.236 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.236 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.236 [2024-11-18 10:36:51.083924] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.236 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.496 "name": "raid_bdev1", 00:08:25.496 "aliases": [ 00:08:25.496 "2990263d-5f58-42c8-acf1-53b62162a7a0" 00:08:25.496 ], 00:08:25.496 "product_name": "Raid Volume", 00:08:25.496 "block_size": 512, 00:08:25.496 "num_blocks": 190464, 00:08:25.496 "uuid": "2990263d-5f58-42c8-acf1-53b62162a7a0", 00:08:25.496 "assigned_rate_limits": { 00:08:25.496 "rw_ios_per_sec": 0, 00:08:25.496 "rw_mbytes_per_sec": 0, 00:08:25.496 "r_mbytes_per_sec": 0, 00:08:25.496 "w_mbytes_per_sec": 0 00:08:25.496 }, 00:08:25.496 "claimed": false, 00:08:25.496 "zoned": false, 00:08:25.496 "supported_io_types": { 00:08:25.496 "read": true, 00:08:25.496 "write": true, 00:08:25.496 "unmap": true, 00:08:25.496 "flush": true, 00:08:25.496 "reset": true, 00:08:25.496 "nvme_admin": false, 00:08:25.496 "nvme_io": false, 00:08:25.496 "nvme_io_md": false, 00:08:25.496 "write_zeroes": true, 00:08:25.496 "zcopy": false, 00:08:25.496 "get_zone_info": false, 00:08:25.496 "zone_management": false, 00:08:25.496 "zone_append": false, 00:08:25.496 "compare": 
false, 00:08:25.496 "compare_and_write": false, 00:08:25.496 "abort": false, 00:08:25.496 "seek_hole": false, 00:08:25.496 "seek_data": false, 00:08:25.496 "copy": false, 00:08:25.496 "nvme_iov_md": false 00:08:25.496 }, 00:08:25.496 "memory_domains": [ 00:08:25.496 { 00:08:25.496 "dma_device_id": "system", 00:08:25.496 "dma_device_type": 1 00:08:25.496 }, 00:08:25.496 { 00:08:25.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.496 "dma_device_type": 2 00:08:25.496 }, 00:08:25.496 { 00:08:25.496 "dma_device_id": "system", 00:08:25.496 "dma_device_type": 1 00:08:25.496 }, 00:08:25.496 { 00:08:25.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.496 "dma_device_type": 2 00:08:25.496 }, 00:08:25.496 { 00:08:25.496 "dma_device_id": "system", 00:08:25.496 "dma_device_type": 1 00:08:25.496 }, 00:08:25.496 { 00:08:25.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.496 "dma_device_type": 2 00:08:25.496 } 00:08:25.496 ], 00:08:25.496 "driver_specific": { 00:08:25.496 "raid": { 00:08:25.496 "uuid": "2990263d-5f58-42c8-acf1-53b62162a7a0", 00:08:25.496 "strip_size_kb": 64, 00:08:25.496 "state": "online", 00:08:25.496 "raid_level": "raid0", 00:08:25.496 "superblock": true, 00:08:25.496 "num_base_bdevs": 3, 00:08:25.496 "num_base_bdevs_discovered": 3, 00:08:25.496 "num_base_bdevs_operational": 3, 00:08:25.496 "base_bdevs_list": [ 00:08:25.496 { 00:08:25.496 "name": "pt1", 00:08:25.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.496 "is_configured": true, 00:08:25.496 "data_offset": 2048, 00:08:25.496 "data_size": 63488 00:08:25.496 }, 00:08:25.496 { 00:08:25.496 "name": "pt2", 00:08:25.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.496 "is_configured": true, 00:08:25.496 "data_offset": 2048, 00:08:25.496 "data_size": 63488 00:08:25.496 }, 00:08:25.496 { 00:08:25.496 "name": "pt3", 00:08:25.496 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:25.496 "is_configured": true, 00:08:25.496 "data_offset": 2048, 00:08:25.496 "data_size": 
63488 00:08:25.496 } 00:08:25.496 ] 00:08:25.496 } 00:08:25.496 } 00:08:25.496 }' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:25.496 pt2 00:08:25.496 pt3' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.496 
10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.496 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.496 [2024-11-18 10:36:51.375419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2990263d-5f58-42c8-acf1-53b62162a7a0 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2990263d-5f58-42c8-acf1-53b62162a7a0 ']' 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.756 [2024-11-18 10:36:51.419099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.756 [2024-11-18 10:36:51.419163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.756 [2024-11-18 10:36:51.419271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.756 [2024-11-18 10:36:51.419360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.756 [2024-11-18 10:36:51.419403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.756 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 [2024-11-18 10:36:51.574911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:25.757 [2024-11-18 10:36:51.577036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:25.757 [2024-11-18 10:36:51.577132] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:25.757 [2024-11-18 10:36:51.577197] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:25.757 [2024-11-18 10:36:51.577242] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:25.757 [2024-11-18 10:36:51.577260] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:25.757 [2024-11-18 10:36:51.577277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.757 [2024-11-18 10:36:51.577287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:25.757 request: 00:08:25.757 { 00:08:25.757 "name": "raid_bdev1", 00:08:25.757 "raid_level": "raid0", 00:08:25.757 "base_bdevs": [ 00:08:25.757 "malloc1", 00:08:25.757 "malloc2", 00:08:25.757 "malloc3" 00:08:25.757 ], 00:08:25.757 "strip_size_kb": 64, 00:08:25.757 "superblock": false, 00:08:25.757 "method": "bdev_raid_create", 00:08:25.757 "req_id": 1 00:08:25.757 } 00:08:25.757 Got JSON-RPC error response 00:08:25.757 response: 00:08:25.757 { 00:08:25.757 "code": -17, 00:08:25.757 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:25.757 } 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.017 [2024-11-18 10:36:51.642747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:26.017 [2024-11-18 10:36:51.642831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.017 [2024-11-18 10:36:51.642867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:26.017 [2024-11-18 10:36:51.642895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.017 [2024-11-18 10:36:51.645305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.017 [2024-11-18 10:36:51.645371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:26.017 [2024-11-18 10:36:51.645461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:26.017 [2024-11-18 10:36:51.645526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:26.017 pt1 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.017 "name": "raid_bdev1", 00:08:26.017 "uuid": "2990263d-5f58-42c8-acf1-53b62162a7a0", 00:08:26.017 
"strip_size_kb": 64, 00:08:26.017 "state": "configuring", 00:08:26.017 "raid_level": "raid0", 00:08:26.017 "superblock": true, 00:08:26.017 "num_base_bdevs": 3, 00:08:26.017 "num_base_bdevs_discovered": 1, 00:08:26.017 "num_base_bdevs_operational": 3, 00:08:26.017 "base_bdevs_list": [ 00:08:26.017 { 00:08:26.017 "name": "pt1", 00:08:26.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.017 "is_configured": true, 00:08:26.017 "data_offset": 2048, 00:08:26.017 "data_size": 63488 00:08:26.017 }, 00:08:26.017 { 00:08:26.017 "name": null, 00:08:26.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.017 "is_configured": false, 00:08:26.017 "data_offset": 2048, 00:08:26.017 "data_size": 63488 00:08:26.017 }, 00:08:26.017 { 00:08:26.017 "name": null, 00:08:26.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.017 "is_configured": false, 00:08:26.017 "data_offset": 2048, 00:08:26.017 "data_size": 63488 00:08:26.017 } 00:08:26.017 ] 00:08:26.017 }' 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.017 10:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.277 [2024-11-18 10:36:52.109940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.277 [2024-11-18 10:36:52.110024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.277 [2024-11-18 10:36:52.110045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:26.277 [2024-11-18 10:36:52.110053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.277 [2024-11-18 10:36:52.110462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.277 [2024-11-18 10:36:52.110480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.277 [2024-11-18 10:36:52.110547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:26.277 [2024-11-18 10:36:52.110564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.277 pt2 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.277 [2024-11-18 10:36:52.121949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.277 10:36:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.277 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.536 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.536 "name": "raid_bdev1", 00:08:26.536 "uuid": "2990263d-5f58-42c8-acf1-53b62162a7a0", 00:08:26.536 "strip_size_kb": 64, 00:08:26.536 "state": "configuring", 00:08:26.536 "raid_level": "raid0", 00:08:26.536 "superblock": true, 00:08:26.536 "num_base_bdevs": 3, 00:08:26.536 "num_base_bdevs_discovered": 1, 00:08:26.536 "num_base_bdevs_operational": 3, 00:08:26.536 "base_bdevs_list": [ 00:08:26.536 { 00:08:26.536 "name": "pt1", 00:08:26.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.536 "is_configured": true, 00:08:26.536 "data_offset": 2048, 00:08:26.536 "data_size": 63488 00:08:26.536 }, 00:08:26.536 { 00:08:26.536 "name": null, 00:08:26.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.536 "is_configured": false, 00:08:26.536 "data_offset": 0, 00:08:26.537 "data_size": 63488 00:08:26.537 }, 00:08:26.537 { 00:08:26.537 "name": null, 00:08:26.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.537 
"is_configured": false, 00:08:26.537 "data_offset": 2048, 00:08:26.537 "data_size": 63488 00:08:26.537 } 00:08:26.537 ] 00:08:26.537 }' 00:08:26.537 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.537 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.796 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:26.796 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.796 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 [2024-11-18 10:36:52.553226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.797 [2024-11-18 10:36:52.553316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.797 [2024-11-18 10:36:52.553347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:26.797 [2024-11-18 10:36:52.553373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.797 [2024-11-18 10:36:52.553775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.797 [2024-11-18 10:36:52.553839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.797 [2024-11-18 10:36:52.553924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:26.797 [2024-11-18 10:36:52.553972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.797 pt2 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 [2024-11-18 10:36:52.565210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:26.797 [2024-11-18 10:36:52.565285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.797 [2024-11-18 10:36:52.565314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:26.797 [2024-11-18 10:36:52.565338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.797 [2024-11-18 10:36:52.565690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.797 [2024-11-18 10:36:52.565753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:26.797 [2024-11-18 10:36:52.565829] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:26.797 [2024-11-18 10:36:52.565874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:26.797 [2024-11-18 10:36:52.566003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:26.797 [2024-11-18 10:36:52.566040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.797 [2024-11-18 10:36:52.566316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:26.797 [2024-11-18 10:36:52.566499] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:26.797 [2024-11-18 10:36:52.566534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:26.797 [2024-11-18 10:36:52.566703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.797 pt3 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.797 "name": "raid_bdev1", 00:08:26.797 "uuid": "2990263d-5f58-42c8-acf1-53b62162a7a0", 00:08:26.797 "strip_size_kb": 64, 00:08:26.797 "state": "online", 00:08:26.797 "raid_level": "raid0", 00:08:26.797 "superblock": true, 00:08:26.797 "num_base_bdevs": 3, 00:08:26.797 "num_base_bdevs_discovered": 3, 00:08:26.797 "num_base_bdevs_operational": 3, 00:08:26.797 "base_bdevs_list": [ 00:08:26.797 { 00:08:26.797 "name": "pt1", 00:08:26.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.797 "is_configured": true, 00:08:26.797 "data_offset": 2048, 00:08:26.797 "data_size": 63488 00:08:26.797 }, 00:08:26.797 { 00:08:26.797 "name": "pt2", 00:08:26.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.797 "is_configured": true, 00:08:26.797 "data_offset": 2048, 00:08:26.797 "data_size": 63488 00:08:26.797 }, 00:08:26.797 { 00:08:26.797 "name": "pt3", 00:08:26.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:26.797 "is_configured": true, 00:08:26.797 "data_offset": 2048, 00:08:26.797 "data_size": 63488 00:08:26.797 } 00:08:26.797 ] 00:08:26.797 }' 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.797 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:27.367 10:36:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.367 [2024-11-18 10:36:52.976750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.367 10:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.367 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.367 "name": "raid_bdev1", 00:08:27.367 "aliases": [ 00:08:27.367 "2990263d-5f58-42c8-acf1-53b62162a7a0" 00:08:27.367 ], 00:08:27.367 "product_name": "Raid Volume", 00:08:27.367 "block_size": 512, 00:08:27.367 "num_blocks": 190464, 00:08:27.367 "uuid": "2990263d-5f58-42c8-acf1-53b62162a7a0", 00:08:27.367 "assigned_rate_limits": { 00:08:27.367 "rw_ios_per_sec": 0, 00:08:27.367 "rw_mbytes_per_sec": 0, 00:08:27.367 "r_mbytes_per_sec": 0, 00:08:27.367 "w_mbytes_per_sec": 0 00:08:27.367 }, 00:08:27.367 "claimed": false, 00:08:27.367 "zoned": false, 00:08:27.367 "supported_io_types": { 00:08:27.367 "read": true, 00:08:27.367 "write": true, 00:08:27.367 "unmap": true, 00:08:27.368 "flush": true, 00:08:27.368 "reset": true, 00:08:27.368 "nvme_admin": false, 00:08:27.368 "nvme_io": false, 00:08:27.368 "nvme_io_md": false, 00:08:27.368 
"write_zeroes": true, 00:08:27.368 "zcopy": false, 00:08:27.368 "get_zone_info": false, 00:08:27.368 "zone_management": false, 00:08:27.368 "zone_append": false, 00:08:27.368 "compare": false, 00:08:27.368 "compare_and_write": false, 00:08:27.368 "abort": false, 00:08:27.368 "seek_hole": false, 00:08:27.368 "seek_data": false, 00:08:27.368 "copy": false, 00:08:27.368 "nvme_iov_md": false 00:08:27.368 }, 00:08:27.368 "memory_domains": [ 00:08:27.368 { 00:08:27.368 "dma_device_id": "system", 00:08:27.368 "dma_device_type": 1 00:08:27.368 }, 00:08:27.368 { 00:08:27.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.368 "dma_device_type": 2 00:08:27.368 }, 00:08:27.368 { 00:08:27.368 "dma_device_id": "system", 00:08:27.368 "dma_device_type": 1 00:08:27.368 }, 00:08:27.368 { 00:08:27.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.368 "dma_device_type": 2 00:08:27.368 }, 00:08:27.368 { 00:08:27.368 "dma_device_id": "system", 00:08:27.368 "dma_device_type": 1 00:08:27.368 }, 00:08:27.368 { 00:08:27.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.368 "dma_device_type": 2 00:08:27.368 } 00:08:27.368 ], 00:08:27.368 "driver_specific": { 00:08:27.368 "raid": { 00:08:27.368 "uuid": "2990263d-5f58-42c8-acf1-53b62162a7a0", 00:08:27.368 "strip_size_kb": 64, 00:08:27.368 "state": "online", 00:08:27.368 "raid_level": "raid0", 00:08:27.368 "superblock": true, 00:08:27.368 "num_base_bdevs": 3, 00:08:27.368 "num_base_bdevs_discovered": 3, 00:08:27.368 "num_base_bdevs_operational": 3, 00:08:27.368 "base_bdevs_list": [ 00:08:27.368 { 00:08:27.368 "name": "pt1", 00:08:27.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.368 "is_configured": true, 00:08:27.368 "data_offset": 2048, 00:08:27.368 "data_size": 63488 00:08:27.368 }, 00:08:27.368 { 00:08:27.368 "name": "pt2", 00:08:27.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.368 "is_configured": true, 00:08:27.368 "data_offset": 2048, 00:08:27.368 "data_size": 63488 00:08:27.368 }, 00:08:27.368 
{ 00:08:27.368 "name": "pt3", 00:08:27.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:27.368 "is_configured": true, 00:08:27.368 "data_offset": 2048, 00:08:27.368 "data_size": 63488 00:08:27.368 } 00:08:27.368 ] 00:08:27.368 } 00:08:27.368 } 00:08:27.368 }' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:27.368 pt2 00:08:27.368 pt3' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:27.368 10:36:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.368 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.368 
[2024-11-18 10:36:53.244293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2990263d-5f58-42c8-acf1-53b62162a7a0 '!=' 2990263d-5f58-42c8-acf1-53b62162a7a0 ']' 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64946 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64946 ']' 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64946 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64946 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64946' 00:08:27.628 killing process with pid 64946 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64946 00:08:27.628 [2024-11-18 10:36:53.329954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.628 [2024-11-18 10:36:53.330029] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.628 [2024-11-18 10:36:53.330075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.628 [2024-11-18 10:36:53.330086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:27.628 10:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64946 00:08:27.889 [2024-11-18 10:36:53.644260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.295 10:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:29.295 00:08:29.295 real 0m5.323s 00:08:29.295 user 0m7.480s 00:08:29.295 sys 0m0.983s 00:08:29.295 10:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.295 ************************************ 00:08:29.295 END TEST raid_superblock_test 00:08:29.295 ************************************ 00:08:29.295 10:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.295 10:36:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:29.295 10:36:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:29.295 10:36:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.295 10:36:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.295 ************************************ 00:08:29.295 START TEST raid_read_error_test 00:08:29.295 ************************************ 00:08:29.295 10:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:29.296 10:36:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oSJSeNVlrh 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65199 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65199 00:08:29.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65199 ']' 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.296 10:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.296 [2024-11-18 10:36:54.967950] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:29.296 [2024-11-18 10:36:54.968078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65199 ] 00:08:29.296 [2024-11-18 10:36:55.122341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.556 [2024-11-18 10:36:55.250237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.816 [2024-11-18 10:36:55.475537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.816 [2024-11-18 10:36:55.475607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.076 BaseBdev1_malloc 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.076 true 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.076 [2024-11-18 10:36:55.872898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:30.076 [2024-11-18 10:36:55.872956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.076 [2024-11-18 10:36:55.872975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:30.076 [2024-11-18 10:36:55.872986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.076 [2024-11-18 10:36:55.875347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.076 [2024-11-18 10:36:55.875428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:30.076 BaseBdev1 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.076 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.077 BaseBdev2_malloc 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.077 true 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.077 [2024-11-18 10:36:55.944156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:30.077 [2024-11-18 10:36:55.944216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.077 [2024-11-18 10:36:55.944231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:30.077 [2024-11-18 10:36:55.944243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.077 [2024-11-18 10:36:55.946461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.077 [2024-11-18 10:36:55.946544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:30.077 BaseBdev2 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.077 10:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 BaseBdev3_malloc 00:08:30.338 10:36:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 true 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 [2024-11-18 10:36:56.046545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:30.338 [2024-11-18 10:36:56.046645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.338 [2024-11-18 10:36:56.046667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:30.338 [2024-11-18 10:36:56.046682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.338 [2024-11-18 10:36:56.048974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.338 [2024-11-18 10:36:56.049014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:30.338 BaseBdev3 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 [2024-11-18 10:36:56.058602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.338 [2024-11-18 10:36:56.061025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.338 [2024-11-18 10:36:56.061106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.338 [2024-11-18 10:36:56.061320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:30.338 [2024-11-18 10:36:56.061336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:30.338 [2024-11-18 10:36:56.061578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:30.338 [2024-11-18 10:36:56.061744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:30.338 [2024-11-18 10:36:56.061758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:30.338 [2024-11-18 10:36:56.061906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.338 10:36:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.338 "name": "raid_bdev1", 00:08:30.338 "uuid": "07e1fa1f-44ea-4396-bf86-269e0d0bcdd6", 00:08:30.338 "strip_size_kb": 64, 00:08:30.338 "state": "online", 00:08:30.338 "raid_level": "raid0", 00:08:30.338 "superblock": true, 00:08:30.338 "num_base_bdevs": 3, 00:08:30.338 "num_base_bdevs_discovered": 3, 00:08:30.338 "num_base_bdevs_operational": 3, 00:08:30.338 "base_bdevs_list": [ 00:08:30.338 { 00:08:30.338 "name": "BaseBdev1", 00:08:30.338 "uuid": "1b21ffdf-c0c4-5fc5-8fc2-ab0f2f19c8ff", 00:08:30.338 "is_configured": true, 00:08:30.338 "data_offset": 2048, 00:08:30.338 "data_size": 63488 00:08:30.338 }, 00:08:30.338 { 00:08:30.338 "name": "BaseBdev2", 00:08:30.338 "uuid": "7ef707ac-1069-5ff9-adf5-0ddd02f81f2a", 00:08:30.338 "is_configured": true, 00:08:30.338 "data_offset": 2048, 00:08:30.338 "data_size": 63488 
00:08:30.338 }, 00:08:30.338 { 00:08:30.338 "name": "BaseBdev3", 00:08:30.338 "uuid": "805f6ca5-d74d-5050-9062-bb94cc35bae0", 00:08:30.338 "is_configured": true, 00:08:30.338 "data_offset": 2048, 00:08:30.338 "data_size": 63488 00:08:30.338 } 00:08:30.338 ] 00:08:30.338 }' 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.338 10:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.911 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:30.911 10:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.911 [2024-11-18 10:36:56.594937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.849 "name": "raid_bdev1", 00:08:31.849 "uuid": "07e1fa1f-44ea-4396-bf86-269e0d0bcdd6", 00:08:31.849 "strip_size_kb": 64, 00:08:31.849 "state": "online", 00:08:31.849 "raid_level": "raid0", 00:08:31.849 "superblock": true, 00:08:31.849 "num_base_bdevs": 3, 00:08:31.849 "num_base_bdevs_discovered": 3, 00:08:31.849 "num_base_bdevs_operational": 3, 00:08:31.849 "base_bdevs_list": [ 00:08:31.849 { 00:08:31.849 "name": "BaseBdev1", 00:08:31.849 "uuid": "1b21ffdf-c0c4-5fc5-8fc2-ab0f2f19c8ff", 00:08:31.849 "is_configured": true, 00:08:31.849 "data_offset": 2048, 00:08:31.849 "data_size": 63488 
00:08:31.849 }, 00:08:31.849 { 00:08:31.849 "name": "BaseBdev2", 00:08:31.849 "uuid": "7ef707ac-1069-5ff9-adf5-0ddd02f81f2a", 00:08:31.849 "is_configured": true, 00:08:31.849 "data_offset": 2048, 00:08:31.849 "data_size": 63488 00:08:31.849 }, 00:08:31.849 { 00:08:31.849 "name": "BaseBdev3", 00:08:31.849 "uuid": "805f6ca5-d74d-5050-9062-bb94cc35bae0", 00:08:31.849 "is_configured": true, 00:08:31.849 "data_offset": 2048, 00:08:31.849 "data_size": 63488 00:08:31.849 } 00:08:31.849 ] 00:08:31.849 }' 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.849 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.108 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:32.108 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.108 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.368 [2024-11-18 10:36:57.992016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.368 [2024-11-18 10:36:57.992059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.368 [2024-11-18 10:36:57.994660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.368 [2024-11-18 10:36:57.994708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.368 [2024-11-18 10:36:57.994748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.368 [2024-11-18 10:36:57.994757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:32.368 { 00:08:32.368 "results": [ 00:08:32.368 { 00:08:32.368 "job": "raid_bdev1", 00:08:32.368 "core_mask": "0x1", 00:08:32.368 "workload": "randrw", 00:08:32.368 "percentage": 50, 
00:08:32.368 "status": "finished", 00:08:32.368 "queue_depth": 1, 00:08:32.368 "io_size": 131072, 00:08:32.368 "runtime": 1.397699, 00:08:32.368 "iops": 14066.691040059412, 00:08:32.368 "mibps": 1758.3363800074264, 00:08:32.368 "io_failed": 1, 00:08:32.368 "io_timeout": 0, 00:08:32.368 "avg_latency_us": 100.20588184865714, 00:08:32.368 "min_latency_us": 21.016593886462882, 00:08:32.368 "max_latency_us": 1523.926637554585 00:08:32.368 } 00:08:32.368 ], 00:08:32.368 "core_count": 1 00:08:32.368 } 00:08:32.368 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.368 10:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65199 00:08:32.368 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65199 ']' 00:08:32.368 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65199 00:08:32.368 10:36:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:32.368 10:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.368 10:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65199 00:08:32.368 10:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.368 10:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.368 10:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65199' 00:08:32.368 killing process with pid 65199 00:08:32.368 10:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65199 00:08:32.368 [2024-11-18 10:36:58.028628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.368 10:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65199 00:08:32.628 [2024-11-18 
10:36:58.271031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oSJSeNVlrh 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:34.010 ************************************ 00:08:34.010 END TEST raid_read_error_test 00:08:34.010 ************************************ 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:34.010 00:08:34.010 real 0m4.629s 00:08:34.010 user 0m5.402s 00:08:34.010 sys 0m0.641s 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.010 10:36:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.010 10:36:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:34.010 10:36:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:34.010 10:36:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.010 10:36:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.010 ************************************ 00:08:34.010 START TEST raid_write_error_test 00:08:34.010 ************************************ 00:08:34.010 10:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:34.010 10:36:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:34.010 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:34.010 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:34.011 10:36:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WuEFYHsuQz 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65345 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65345 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65345 ']' 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.011 10:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.011 [2024-11-18 10:36:59.670162] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:34.011 [2024-11-18 10:36:59.670313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65345 ] 00:08:34.011 [2024-11-18 10:36:59.848649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.271 [2024-11-18 10:36:59.981785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.531 [2024-11-18 10:37:00.204247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.531 [2024-11-18 10:37:00.204315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.791 BaseBdev1_malloc 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.791 true 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.791 [2024-11-18 10:37:00.603884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:34.791 [2024-11-18 10:37:00.603940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.791 [2024-11-18 10:37:00.603963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:34.791 [2024-11-18 10:37:00.603974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.791 [2024-11-18 10:37:00.606307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.791 [2024-11-18 10:37:00.606346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:34.791 BaseBdev1 00:08:34.791 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.792 BaseBdev2_malloc 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.792 true 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.792 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.052 [2024-11-18 10:37:00.675328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:35.052 [2024-11-18 10:37:00.675445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.052 [2024-11-18 10:37:00.675469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:35.052 [2024-11-18 10:37:00.675482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.052 [2024-11-18 10:37:00.677930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.052 [2024-11-18 10:37:00.677972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:35.052 BaseBdev2 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.052 10:37:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.052 BaseBdev3_malloc 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.052 true 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.052 [2024-11-18 10:37:00.781850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:35.052 [2024-11-18 10:37:00.781903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.052 [2024-11-18 10:37:00.781924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:35.052 [2024-11-18 10:37:00.781936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.052 [2024-11-18 10:37:00.784376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.052 [2024-11-18 10:37:00.784489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:35.052 BaseBdev3 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.052 [2024-11-18 10:37:00.793883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.052 [2024-11-18 10:37:00.795940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.052 [2024-11-18 10:37:00.796065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.052 [2024-11-18 10:37:00.796273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.052 [2024-11-18 10:37:00.796288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.052 [2024-11-18 10:37:00.796517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:35.052 [2024-11-18 10:37:00.796663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.052 [2024-11-18 10:37:00.796677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:35.052 [2024-11-18 10:37:00.796800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.052 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.052 "name": "raid_bdev1", 00:08:35.052 "uuid": "aedd99f4-61b1-4e73-9140-d14a1cd1a41e", 00:08:35.052 "strip_size_kb": 64, 00:08:35.052 "state": "online", 00:08:35.052 "raid_level": "raid0", 00:08:35.052 "superblock": true, 00:08:35.052 "num_base_bdevs": 3, 00:08:35.053 "num_base_bdevs_discovered": 3, 00:08:35.053 "num_base_bdevs_operational": 3, 00:08:35.053 "base_bdevs_list": [ 00:08:35.053 { 00:08:35.053 "name": "BaseBdev1", 
00:08:35.053 "uuid": "c607af2d-2817-5fe0-a1ba-15895d5d7465", 00:08:35.053 "is_configured": true, 00:08:35.053 "data_offset": 2048, 00:08:35.053 "data_size": 63488 00:08:35.053 }, 00:08:35.053 { 00:08:35.053 "name": "BaseBdev2", 00:08:35.053 "uuid": "2f6582b5-6bef-57a7-95e8-68f22c2239ba", 00:08:35.053 "is_configured": true, 00:08:35.053 "data_offset": 2048, 00:08:35.053 "data_size": 63488 00:08:35.053 }, 00:08:35.053 { 00:08:35.053 "name": "BaseBdev3", 00:08:35.053 "uuid": "ac24d2f1-f5f9-5078-af24-40e424e2b630", 00:08:35.053 "is_configured": true, 00:08:35.053 "data_offset": 2048, 00:08:35.053 "data_size": 63488 00:08:35.053 } 00:08:35.053 ] 00:08:35.053 }' 00:08:35.053 10:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.053 10:37:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.622 10:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:35.622 10:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:35.622 [2024-11-18 10:37:01.326210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:36.562 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:36.562 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.562 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.562 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.562 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:36.562 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:36.562 10:37:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.563 "name": "raid_bdev1", 00:08:36.563 "uuid": "aedd99f4-61b1-4e73-9140-d14a1cd1a41e", 00:08:36.563 "strip_size_kb": 64, 00:08:36.563 "state": "online", 00:08:36.563 
"raid_level": "raid0", 00:08:36.563 "superblock": true, 00:08:36.563 "num_base_bdevs": 3, 00:08:36.563 "num_base_bdevs_discovered": 3, 00:08:36.563 "num_base_bdevs_operational": 3, 00:08:36.563 "base_bdevs_list": [ 00:08:36.563 { 00:08:36.563 "name": "BaseBdev1", 00:08:36.563 "uuid": "c607af2d-2817-5fe0-a1ba-15895d5d7465", 00:08:36.563 "is_configured": true, 00:08:36.563 "data_offset": 2048, 00:08:36.563 "data_size": 63488 00:08:36.563 }, 00:08:36.563 { 00:08:36.563 "name": "BaseBdev2", 00:08:36.563 "uuid": "2f6582b5-6bef-57a7-95e8-68f22c2239ba", 00:08:36.563 "is_configured": true, 00:08:36.563 "data_offset": 2048, 00:08:36.563 "data_size": 63488 00:08:36.563 }, 00:08:36.563 { 00:08:36.563 "name": "BaseBdev3", 00:08:36.563 "uuid": "ac24d2f1-f5f9-5078-af24-40e424e2b630", 00:08:36.563 "is_configured": true, 00:08:36.563 "data_offset": 2048, 00:08:36.563 "data_size": 63488 00:08:36.563 } 00:08:36.563 ] 00:08:36.563 }' 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.563 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.823 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:36.823 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.823 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.823 [2024-11-18 10:37:02.690468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.823 [2024-11-18 10:37:02.690581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.823 [2024-11-18 10:37:02.693201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.823 [2024-11-18 10:37:02.693293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.823 [2024-11-18 10:37:02.693354] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.823 [2024-11-18 10:37:02.693394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:36.823 { 00:08:36.823 "results": [ 00:08:36.823 { 00:08:36.823 "job": "raid_bdev1", 00:08:36.823 "core_mask": "0x1", 00:08:36.823 "workload": "randrw", 00:08:36.823 "percentage": 50, 00:08:36.823 "status": "finished", 00:08:36.823 "queue_depth": 1, 00:08:36.823 "io_size": 131072, 00:08:36.823 "runtime": 1.365052, 00:08:36.823 "iops": 14737.900094648408, 00:08:36.824 "mibps": 1842.237511831051, 00:08:36.824 "io_failed": 1, 00:08:36.824 "io_timeout": 0, 00:08:36.824 "avg_latency_us": 95.65243100495285, 00:08:36.824 "min_latency_us": 24.705676855895195, 00:08:36.824 "max_latency_us": 1438.071615720524 00:08:36.824 } 00:08:36.824 ], 00:08:36.824 "core_count": 1 00:08:36.824 } 00:08:36.824 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.824 10:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65345 00:08:36.824 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65345 ']' 00:08:36.824 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65345 00:08:36.824 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:36.824 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.084 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65345 00:08:37.084 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.084 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.084 10:37:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65345' 00:08:37.084 killing process with pid 65345 00:08:37.084 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65345 00:08:37.084 [2024-11-18 10:37:02.730878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.084 10:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65345 00:08:37.344 [2024-11-18 10:37:02.968716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.304 10:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WuEFYHsuQz 00:08:38.304 10:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:38.304 10:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:38.576 10:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:38.576 10:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:38.576 10:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.576 10:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.576 10:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:38.576 00:08:38.576 real 0m4.616s 00:08:38.576 user 0m5.384s 00:08:38.576 sys 0m0.629s 00:08:38.576 10:37:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.576 ************************************ 00:08:38.576 END TEST raid_write_error_test 00:08:38.576 ************************************ 00:08:38.576 10:37:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.576 10:37:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:38.576 10:37:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:38.576 10:37:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:38.576 10:37:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.576 10:37:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.576 ************************************ 00:08:38.576 START TEST raid_state_function_test 00:08:38.576 ************************************ 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:38.576 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:38.577 10:37:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65489 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65489' 00:08:38.577 Process raid pid: 65489 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65489 00:08:38.577 10:37:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65489 ']' 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.577 10:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.577 [2024-11-18 10:37:04.354929] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:38.577 [2024-11-18 10:37:04.355149] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.837 [2024-11-18 10:37:04.536590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.837 [2024-11-18 10:37:04.672053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.096 [2024-11-18 10:37:04.902259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.097 [2024-11-18 10:37:04.902360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.356 [2024-11-18 10:37:05.181001] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.356 [2024-11-18 10:37:05.181120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.356 [2024-11-18 10:37:05.181150] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.356 [2024-11-18 10:37:05.181184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.356 [2024-11-18 10:37:05.181204] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.356 [2024-11-18 10:37:05.181226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.356 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.357 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.357 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.357 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.357 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.616 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.616 "name": "Existed_Raid", 00:08:39.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.616 "strip_size_kb": 64, 00:08:39.616 "state": "configuring", 00:08:39.616 "raid_level": "concat", 00:08:39.616 "superblock": false, 00:08:39.616 "num_base_bdevs": 3, 00:08:39.616 "num_base_bdevs_discovered": 0, 00:08:39.616 "num_base_bdevs_operational": 3, 00:08:39.616 "base_bdevs_list": [ 00:08:39.616 { 00:08:39.616 "name": "BaseBdev1", 00:08:39.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.616 "is_configured": false, 00:08:39.616 "data_offset": 0, 00:08:39.616 "data_size": 0 00:08:39.616 }, 00:08:39.616 { 00:08:39.616 "name": "BaseBdev2", 00:08:39.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.616 "is_configured": false, 00:08:39.616 "data_offset": 0, 00:08:39.616 "data_size": 0 00:08:39.616 }, 00:08:39.616 { 00:08:39.616 "name": "BaseBdev3", 00:08:39.616 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:39.616 "is_configured": false, 00:08:39.616 "data_offset": 0, 00:08:39.616 "data_size": 0 00:08:39.616 } 00:08:39.616 ] 00:08:39.616 }' 00:08:39.616 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.616 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 [2024-11-18 10:37:05.564321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.877 [2024-11-18 10:37:05.564401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.877 [2024-11-18 10:37:05.576316] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.877 [2024-11-18 10:37:05.576361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.877 [2024-11-18 10:37:05.576370] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.877 [2024-11-18 10:37:05.576380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:39.877 [2024-11-18 10:37:05.576386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.877 [2024-11-18 10:37:05.576396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.877 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.878 [2024-11-18 10:37:05.628665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.878 BaseBdev1 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.878 [ 00:08:39.878 { 00:08:39.878 "name": "BaseBdev1", 00:08:39.878 "aliases": [ 00:08:39.878 "65064b4b-3b62-472c-9774-9b3dcee9616a" 00:08:39.878 ], 00:08:39.878 "product_name": "Malloc disk", 00:08:39.878 "block_size": 512, 00:08:39.878 "num_blocks": 65536, 00:08:39.878 "uuid": "65064b4b-3b62-472c-9774-9b3dcee9616a", 00:08:39.878 "assigned_rate_limits": { 00:08:39.878 "rw_ios_per_sec": 0, 00:08:39.878 "rw_mbytes_per_sec": 0, 00:08:39.878 "r_mbytes_per_sec": 0, 00:08:39.878 "w_mbytes_per_sec": 0 00:08:39.878 }, 00:08:39.878 "claimed": true, 00:08:39.878 "claim_type": "exclusive_write", 00:08:39.878 "zoned": false, 00:08:39.878 "supported_io_types": { 00:08:39.878 "read": true, 00:08:39.878 "write": true, 00:08:39.878 "unmap": true, 00:08:39.878 "flush": true, 00:08:39.878 "reset": true, 00:08:39.878 "nvme_admin": false, 00:08:39.878 "nvme_io": false, 00:08:39.878 "nvme_io_md": false, 00:08:39.878 "write_zeroes": true, 00:08:39.878 "zcopy": true, 00:08:39.878 "get_zone_info": false, 00:08:39.878 "zone_management": false, 00:08:39.878 "zone_append": false, 00:08:39.878 "compare": false, 00:08:39.878 "compare_and_write": false, 00:08:39.878 "abort": true, 00:08:39.878 "seek_hole": false, 00:08:39.878 "seek_data": false, 00:08:39.878 "copy": true, 00:08:39.878 "nvme_iov_md": false 00:08:39.878 }, 00:08:39.878 "memory_domains": [ 00:08:39.878 { 00:08:39.878 "dma_device_id": "system", 00:08:39.878 "dma_device_type": 1 00:08:39.878 }, 00:08:39.878 { 00:08:39.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:39.878 "dma_device_type": 2 00:08:39.878 } 00:08:39.878 ], 00:08:39.878 "driver_specific": {} 00:08:39.878 } 00:08:39.878 ] 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.878 10:37:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.878 "name": "Existed_Raid", 00:08:39.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.878 "strip_size_kb": 64, 00:08:39.878 "state": "configuring", 00:08:39.878 "raid_level": "concat", 00:08:39.878 "superblock": false, 00:08:39.878 "num_base_bdevs": 3, 00:08:39.878 "num_base_bdevs_discovered": 1, 00:08:39.878 "num_base_bdevs_operational": 3, 00:08:39.878 "base_bdevs_list": [ 00:08:39.878 { 00:08:39.878 "name": "BaseBdev1", 00:08:39.878 "uuid": "65064b4b-3b62-472c-9774-9b3dcee9616a", 00:08:39.878 "is_configured": true, 00:08:39.878 "data_offset": 0, 00:08:39.878 "data_size": 65536 00:08:39.878 }, 00:08:39.878 { 00:08:39.878 "name": "BaseBdev2", 00:08:39.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.878 "is_configured": false, 00:08:39.878 "data_offset": 0, 00:08:39.878 "data_size": 0 00:08:39.878 }, 00:08:39.878 { 00:08:39.878 "name": "BaseBdev3", 00:08:39.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.878 "is_configured": false, 00:08:39.878 "data_offset": 0, 00:08:39.878 "data_size": 0 00:08:39.878 } 00:08:39.878 ] 00:08:39.878 }' 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.878 10:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.448 [2024-11-18 10:37:06.099880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.448 [2024-11-18 10:37:06.099927] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.448 [2024-11-18 10:37:06.111918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.448 [2024-11-18 10:37:06.113972] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.448 [2024-11-18 10:37:06.114015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.448 [2024-11-18 10:37:06.114024] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.448 [2024-11-18 10:37:06.114032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.448 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.449 10:37:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.449 "name": "Existed_Raid", 00:08:40.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.449 "strip_size_kb": 64, 00:08:40.449 "state": "configuring", 00:08:40.449 "raid_level": "concat", 00:08:40.449 "superblock": false, 00:08:40.449 "num_base_bdevs": 3, 00:08:40.449 "num_base_bdevs_discovered": 1, 00:08:40.449 "num_base_bdevs_operational": 3, 00:08:40.449 "base_bdevs_list": [ 00:08:40.449 { 00:08:40.449 "name": "BaseBdev1", 00:08:40.449 "uuid": "65064b4b-3b62-472c-9774-9b3dcee9616a", 00:08:40.449 "is_configured": true, 00:08:40.449 "data_offset": 
0, 00:08:40.449 "data_size": 65536 00:08:40.449 }, 00:08:40.449 { 00:08:40.449 "name": "BaseBdev2", 00:08:40.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.449 "is_configured": false, 00:08:40.449 "data_offset": 0, 00:08:40.449 "data_size": 0 00:08:40.449 }, 00:08:40.449 { 00:08:40.449 "name": "BaseBdev3", 00:08:40.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.449 "is_configured": false, 00:08:40.449 "data_offset": 0, 00:08:40.449 "data_size": 0 00:08:40.449 } 00:08:40.449 ] 00:08:40.449 }' 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.449 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.709 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.709 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.709 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.969 [2024-11-18 10:37:06.594761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.969 BaseBdev2 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.969 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.969 [ 00:08:40.969 { 00:08:40.969 "name": "BaseBdev2", 00:08:40.969 "aliases": [ 00:08:40.969 "827f37f2-c0d5-4711-9e99-9b50df315b38" 00:08:40.969 ], 00:08:40.969 "product_name": "Malloc disk", 00:08:40.969 "block_size": 512, 00:08:40.969 "num_blocks": 65536, 00:08:40.969 "uuid": "827f37f2-c0d5-4711-9e99-9b50df315b38", 00:08:40.969 "assigned_rate_limits": { 00:08:40.969 "rw_ios_per_sec": 0, 00:08:40.969 "rw_mbytes_per_sec": 0, 00:08:40.969 "r_mbytes_per_sec": 0, 00:08:40.969 "w_mbytes_per_sec": 0 00:08:40.969 }, 00:08:40.969 "claimed": true, 00:08:40.969 "claim_type": "exclusive_write", 00:08:40.969 "zoned": false, 00:08:40.969 "supported_io_types": { 00:08:40.969 "read": true, 00:08:40.969 "write": true, 00:08:40.969 "unmap": true, 00:08:40.969 "flush": true, 00:08:40.969 "reset": true, 00:08:40.969 "nvme_admin": false, 00:08:40.969 "nvme_io": false, 00:08:40.970 "nvme_io_md": false, 00:08:40.970 "write_zeroes": true, 00:08:40.970 "zcopy": true, 00:08:40.970 "get_zone_info": false, 00:08:40.970 "zone_management": false, 00:08:40.970 "zone_append": false, 00:08:40.970 "compare": false, 00:08:40.970 "compare_and_write": false, 00:08:40.970 "abort": true, 00:08:40.970 "seek_hole": 
false, 00:08:40.970 "seek_data": false, 00:08:40.970 "copy": true, 00:08:40.970 "nvme_iov_md": false 00:08:40.970 }, 00:08:40.970 "memory_domains": [ 00:08:40.970 { 00:08:40.970 "dma_device_id": "system", 00:08:40.970 "dma_device_type": 1 00:08:40.970 }, 00:08:40.970 { 00:08:40.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.970 "dma_device_type": 2 00:08:40.970 } 00:08:40.970 ], 00:08:40.970 "driver_specific": {} 00:08:40.970 } 00:08:40.970 ] 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.970 "name": "Existed_Raid", 00:08:40.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.970 "strip_size_kb": 64, 00:08:40.970 "state": "configuring", 00:08:40.970 "raid_level": "concat", 00:08:40.970 "superblock": false, 00:08:40.970 "num_base_bdevs": 3, 00:08:40.970 "num_base_bdevs_discovered": 2, 00:08:40.970 "num_base_bdevs_operational": 3, 00:08:40.970 "base_bdevs_list": [ 00:08:40.970 { 00:08:40.970 "name": "BaseBdev1", 00:08:40.970 "uuid": "65064b4b-3b62-472c-9774-9b3dcee9616a", 00:08:40.970 "is_configured": true, 00:08:40.970 "data_offset": 0, 00:08:40.970 "data_size": 65536 00:08:40.970 }, 00:08:40.970 { 00:08:40.970 "name": "BaseBdev2", 00:08:40.970 "uuid": "827f37f2-c0d5-4711-9e99-9b50df315b38", 00:08:40.970 "is_configured": true, 00:08:40.970 "data_offset": 0, 00:08:40.970 "data_size": 65536 00:08:40.970 }, 00:08:40.970 { 00:08:40.970 "name": "BaseBdev3", 00:08:40.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.970 "is_configured": false, 00:08:40.970 "data_offset": 0, 00:08:40.970 "data_size": 0 00:08:40.970 } 00:08:40.970 ] 00:08:40.970 }' 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.970 10:37:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.230 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.230 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.230 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.491 [2024-11-18 10:37:07.117469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.491 [2024-11-18 10:37:07.117573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:41.491 [2024-11-18 10:37:07.117593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:41.491 [2024-11-18 10:37:07.117912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:41.491 [2024-11-18 10:37:07.118106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:41.491 [2024-11-18 10:37:07.118116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:41.491 [2024-11-18 10:37:07.118431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.491 BaseBdev3 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.491 10:37:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.491 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.492 [ 00:08:41.492 { 00:08:41.492 "name": "BaseBdev3", 00:08:41.492 "aliases": [ 00:08:41.492 "80622560-b99f-4395-8f0a-befec335da4c" 00:08:41.492 ], 00:08:41.492 "product_name": "Malloc disk", 00:08:41.492 "block_size": 512, 00:08:41.492 "num_blocks": 65536, 00:08:41.492 "uuid": "80622560-b99f-4395-8f0a-befec335da4c", 00:08:41.492 "assigned_rate_limits": { 00:08:41.492 "rw_ios_per_sec": 0, 00:08:41.492 "rw_mbytes_per_sec": 0, 00:08:41.492 "r_mbytes_per_sec": 0, 00:08:41.492 "w_mbytes_per_sec": 0 00:08:41.492 }, 00:08:41.492 "claimed": true, 00:08:41.492 "claim_type": "exclusive_write", 00:08:41.492 "zoned": false, 00:08:41.492 "supported_io_types": { 00:08:41.492 "read": true, 00:08:41.492 "write": true, 00:08:41.492 "unmap": true, 00:08:41.492 "flush": true, 00:08:41.492 "reset": true, 00:08:41.492 "nvme_admin": false, 00:08:41.492 "nvme_io": false, 00:08:41.492 "nvme_io_md": false, 00:08:41.492 "write_zeroes": true, 00:08:41.492 "zcopy": true, 00:08:41.492 "get_zone_info": false, 00:08:41.492 "zone_management": false, 00:08:41.492 "zone_append": false, 00:08:41.492 "compare": false, 
00:08:41.492 "compare_and_write": false, 00:08:41.492 "abort": true, 00:08:41.492 "seek_hole": false, 00:08:41.492 "seek_data": false, 00:08:41.492 "copy": true, 00:08:41.492 "nvme_iov_md": false 00:08:41.492 }, 00:08:41.492 "memory_domains": [ 00:08:41.492 { 00:08:41.492 "dma_device_id": "system", 00:08:41.492 "dma_device_type": 1 00:08:41.492 }, 00:08:41.492 { 00:08:41.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.492 "dma_device_type": 2 00:08:41.492 } 00:08:41.492 ], 00:08:41.492 "driver_specific": {} 00:08:41.492 } 00:08:41.492 ] 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.492 "name": "Existed_Raid", 00:08:41.492 "uuid": "0c29dd84-4a27-40d5-88e0-ded66df2e6bd", 00:08:41.492 "strip_size_kb": 64, 00:08:41.492 "state": "online", 00:08:41.492 "raid_level": "concat", 00:08:41.492 "superblock": false, 00:08:41.492 "num_base_bdevs": 3, 00:08:41.492 "num_base_bdevs_discovered": 3, 00:08:41.492 "num_base_bdevs_operational": 3, 00:08:41.492 "base_bdevs_list": [ 00:08:41.492 { 00:08:41.492 "name": "BaseBdev1", 00:08:41.492 "uuid": "65064b4b-3b62-472c-9774-9b3dcee9616a", 00:08:41.492 "is_configured": true, 00:08:41.492 "data_offset": 0, 00:08:41.492 "data_size": 65536 00:08:41.492 }, 00:08:41.492 { 00:08:41.492 "name": "BaseBdev2", 00:08:41.492 "uuid": "827f37f2-c0d5-4711-9e99-9b50df315b38", 00:08:41.492 "is_configured": true, 00:08:41.492 "data_offset": 0, 00:08:41.492 "data_size": 65536 00:08:41.492 }, 00:08:41.492 { 00:08:41.492 "name": "BaseBdev3", 00:08:41.492 "uuid": "80622560-b99f-4395-8f0a-befec335da4c", 00:08:41.492 "is_configured": true, 00:08:41.492 "data_offset": 0, 00:08:41.492 "data_size": 65536 00:08:41.492 } 00:08:41.492 ] 00:08:41.492 }' 00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:41.492 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.752 [2024-11-18 10:37:07.612913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:41.752 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.012 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:42.012 "name": "Existed_Raid",
00:08:42.012 "aliases": [
00:08:42.012 "0c29dd84-4a27-40d5-88e0-ded66df2e6bd"
00:08:42.012 ],
00:08:42.012 "product_name": "Raid Volume",
00:08:42.012 "block_size": 512,
00:08:42.012 "num_blocks": 196608,
00:08:42.012 "uuid": "0c29dd84-4a27-40d5-88e0-ded66df2e6bd",
00:08:42.012 "assigned_rate_limits": {
00:08:42.012 "rw_ios_per_sec": 0,
00:08:42.012 "rw_mbytes_per_sec": 0,
00:08:42.012 "r_mbytes_per_sec": 0,
00:08:42.012 "w_mbytes_per_sec": 0
00:08:42.012 },
00:08:42.012 "claimed": false,
00:08:42.012 "zoned": false,
00:08:42.012 "supported_io_types": {
00:08:42.012 "read": true,
00:08:42.012 "write": true,
00:08:42.012 "unmap": true,
00:08:42.012 "flush": true,
00:08:42.012 "reset": true,
00:08:42.012 "nvme_admin": false,
00:08:42.012 "nvme_io": false,
00:08:42.012 "nvme_io_md": false,
00:08:42.012 "write_zeroes": true,
00:08:42.012 "zcopy": false,
00:08:42.012 "get_zone_info": false,
00:08:42.012 "zone_management": false,
00:08:42.012 "zone_append": false,
00:08:42.012 "compare": false,
00:08:42.012 "compare_and_write": false,
00:08:42.012 "abort": false,
00:08:42.012 "seek_hole": false,
00:08:42.012 "seek_data": false,
00:08:42.012 "copy": false,
00:08:42.012 "nvme_iov_md": false
00:08:42.012 },
00:08:42.012 "memory_domains": [
00:08:42.012 {
00:08:42.012 "dma_device_id": "system",
00:08:42.012 "dma_device_type": 1
00:08:42.012 },
00:08:42.012 {
00:08:42.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.012 "dma_device_type": 2
00:08:42.012 },
00:08:42.012 {
00:08:42.012 "dma_device_id": "system",
00:08:42.012 "dma_device_type": 1
00:08:42.012 },
00:08:42.012 {
00:08:42.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.012 "dma_device_type": 2
00:08:42.012 },
00:08:42.012 {
00:08:42.012 "dma_device_id": "system",
00:08:42.012 "dma_device_type": 1
00:08:42.012 },
00:08:42.012 {
00:08:42.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.012 "dma_device_type": 2
00:08:42.012 }
00:08:42.012 ],
00:08:42.012 "driver_specific": {
00:08:42.012 "raid": {
00:08:42.012 "uuid": "0c29dd84-4a27-40d5-88e0-ded66df2e6bd",
00:08:42.012 "strip_size_kb": 64,
00:08:42.012 "state": "online",
00:08:42.012 "raid_level": "concat",
00:08:42.012 "superblock": false,
00:08:42.012 "num_base_bdevs": 3,
00:08:42.012 "num_base_bdevs_discovered": 3,
00:08:42.013 "num_base_bdevs_operational": 3,
00:08:42.013 "base_bdevs_list": [
00:08:42.013 {
00:08:42.013 "name": "BaseBdev1",
00:08:42.013 "uuid": "65064b4b-3b62-472c-9774-9b3dcee9616a",
00:08:42.013 "is_configured": true,
00:08:42.013 "data_offset": 0,
00:08:42.013 "data_size": 65536
00:08:42.013 },
00:08:42.013 {
00:08:42.013 "name": "BaseBdev2",
00:08:42.013 "uuid": "827f37f2-c0d5-4711-9e99-9b50df315b38",
00:08:42.013 "is_configured": true,
00:08:42.013 "data_offset": 0,
00:08:42.013 "data_size": 65536
00:08:42.013 },
00:08:42.013 {
00:08:42.013 "name": "BaseBdev3",
00:08:42.013 "uuid": "80622560-b99f-4395-8f0a-befec335da4c",
00:08:42.013 "is_configured": true,
00:08:42.013 "data_offset": 0,
00:08:42.013 "data_size": 65536
00:08:42.013 }
00:08:42.013 ]
00:08:42.013 }
00:08:42.013 }
00:08:42.013 }'
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:42.013 BaseBdev2
00:08:42.013 BaseBdev3'
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.013 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.013 [2024-11-18 10:37:07.848289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:42.013 [2024-11-18 10:37:07.848314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:42.013 [2024-11-18 10:37:07.848364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.273 10:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.273 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:42.273 "name": "Existed_Raid",
00:08:42.273 "uuid": "0c29dd84-4a27-40d5-88e0-ded66df2e6bd",
00:08:42.273 "strip_size_kb": 64,
00:08:42.273 "state": "offline",
00:08:42.273 "raid_level": "concat",
00:08:42.273 "superblock": false,
00:08:42.273 "num_base_bdevs": 3,
00:08:42.273 "num_base_bdevs_discovered": 2,
00:08:42.273 "num_base_bdevs_operational": 2,
00:08:42.273 "base_bdevs_list": [
00:08:42.273 {
00:08:42.273 "name": null,
00:08:42.273 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.273 "is_configured": false,
00:08:42.273 "data_offset": 0,
00:08:42.273 "data_size": 65536
00:08:42.273 },
00:08:42.273 {
00:08:42.273 "name": "BaseBdev2",
00:08:42.273 "uuid": "827f37f2-c0d5-4711-9e99-9b50df315b38",
00:08:42.273 "is_configured": true,
00:08:42.273 "data_offset": 0,
00:08:42.273 "data_size": 65536
00:08:42.273 },
00:08:42.273 {
00:08:42.273 "name": "BaseBdev3",
00:08:42.273 "uuid": "80622560-b99f-4395-8f0a-befec335da4c",
00:08:42.273 "is_configured": true,
00:08:42.273 "data_offset": 0,
00:08:42.273 "data_size": 65536
00:08:42.273 }
00:08:42.273 ]
00:08:42.273 }'
00:08:42.273 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:42.273 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.533 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.533 [2024-11-18 10:37:08.388068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.794 [2024-11-18 10:37:08.547815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:42.794 [2024-11-18 10:37:08.547943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.794 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.055 BaseBdev2
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.055 [
00:08:43.055 {
00:08:43.055 "name": "BaseBdev2",
00:08:43.055 "aliases": [
00:08:43.055 "9b23a81d-7b0c-40cf-ba00-8ae66b847792"
00:08:43.055 ],
00:08:43.055 "product_name": "Malloc disk",
00:08:43.055 "block_size": 512,
00:08:43.055 "num_blocks": 65536,
00:08:43.055 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792",
00:08:43.055 "assigned_rate_limits": {
00:08:43.055 "rw_ios_per_sec": 0,
00:08:43.055 "rw_mbytes_per_sec": 0,
00:08:43.055 "r_mbytes_per_sec": 0,
00:08:43.055 "w_mbytes_per_sec": 0
00:08:43.055 },
00:08:43.055 "claimed": false,
00:08:43.055 "zoned": false,
00:08:43.055 "supported_io_types": {
00:08:43.055 "read": true,
00:08:43.055 "write": true,
00:08:43.055 "unmap": true,
00:08:43.055 "flush": true,
00:08:43.055 "reset": true,
00:08:43.055 "nvme_admin": false,
00:08:43.055 "nvme_io": false,
00:08:43.055 "nvme_io_md": false,
00:08:43.055 "write_zeroes": true,
00:08:43.055 "zcopy": true,
00:08:43.055 "get_zone_info": false,
00:08:43.055 "zone_management": false,
00:08:43.055 "zone_append": false,
00:08:43.055 "compare": false,
00:08:43.055 "compare_and_write": false,
00:08:43.055 "abort": true,
00:08:43.055 "seek_hole": false,
00:08:43.055 "seek_data": false,
00:08:43.055 "copy": true,
00:08:43.055 "nvme_iov_md": false
00:08:43.055 },
00:08:43.055 "memory_domains": [
00:08:43.055 {
00:08:43.055 "dma_device_id": "system",
00:08:43.055 "dma_device_type": 1
00:08:43.055 },
00:08:43.055 {
00:08:43.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.055 "dma_device_type": 2
00:08:43.055 }
00:08:43.055 ],
00:08:43.055 "driver_specific": {}
00:08:43.055 }
00:08:43.055 ]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.055 BaseBdev3
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.055 [
00:08:43.055 {
00:08:43.055 "name": "BaseBdev3",
00:08:43.055 "aliases": [
00:08:43.055 "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa"
00:08:43.055 ],
00:08:43.055 "product_name": "Malloc disk",
00:08:43.055 "block_size": 512,
00:08:43.055 "num_blocks": 65536,
00:08:43.055 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa",
00:08:43.055 "assigned_rate_limits": {
00:08:43.055 "rw_ios_per_sec": 0,
00:08:43.055 "rw_mbytes_per_sec": 0,
00:08:43.055 "r_mbytes_per_sec": 0,
00:08:43.055 "w_mbytes_per_sec": 0
00:08:43.055 },
00:08:43.055 "claimed": false,
00:08:43.055 "zoned": false,
00:08:43.055 "supported_io_types": {
00:08:43.055 "read": true,
00:08:43.055 "write": true,
00:08:43.055 "unmap": true,
00:08:43.055 "flush": true,
00:08:43.055 "reset": true,
00:08:43.055 "nvme_admin": false,
00:08:43.055 "nvme_io": false,
00:08:43.055 "nvme_io_md": false,
00:08:43.055 "write_zeroes": true,
00:08:43.055 "zcopy": true,
00:08:43.055 "get_zone_info": false,
00:08:43.055 "zone_management": false,
00:08:43.055 "zone_append": false,
00:08:43.055 "compare": false,
00:08:43.055 "compare_and_write": false,
00:08:43.055 "abort": true,
00:08:43.055 "seek_hole": false,
00:08:43.055 "seek_data": false,
00:08:43.055 "copy": true,
00:08:43.055 "nvme_iov_md": false
00:08:43.055 },
00:08:43.055 "memory_domains": [
00:08:43.055 {
00:08:43.055 "dma_device_id": "system",
00:08:43.055 "dma_device_type": 1
00:08:43.055 },
00:08:43.055 {
00:08:43.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.055 "dma_device_type": 2
00:08:43.055 }
00:08:43.055 ],
00:08:43.055 "driver_specific": {}
00:08:43.055 }
00:08:43.055 ]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.055 [2024-11-18 10:37:08.866000] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:43.055 [2024-11-18 10:37:08.866086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:43.055 [2024-11-18 10:37:08.866126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:43.055 [2024-11-18 10:37:08.868177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.055 "name": "Existed_Raid",
00:08:43.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.055 "strip_size_kb": 64,
00:08:43.055 "state": "configuring",
00:08:43.055 "raid_level": "concat",
00:08:43.055 "superblock": false,
00:08:43.055 "num_base_bdevs": 3,
00:08:43.055 "num_base_bdevs_discovered": 2,
00:08:43.055 "num_base_bdevs_operational": 3,
00:08:43.055 "base_bdevs_list": [
00:08:43.055 {
00:08:43.055 "name": "BaseBdev1",
00:08:43.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.055 "is_configured": false,
00:08:43.055 "data_offset": 0,
00:08:43.055 "data_size": 0
00:08:43.055 },
00:08:43.055 {
00:08:43.055 "name": "BaseBdev2",
00:08:43.055 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792",
00:08:43.055 "is_configured": true,
00:08:43.055 "data_offset": 0,
00:08:43.055 "data_size": 65536
00:08:43.055 },
00:08:43.055 {
00:08:43.055 "name": "BaseBdev3",
00:08:43.055 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa",
00:08:43.055 "is_configured": true,
00:08:43.055 "data_offset": 0,
00:08:43.055 "data_size": 65536
00:08:43.055 }
00:08:43.055 ]
00:08:43.055 }'
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.055 10:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.624 [2024-11-18 10:37:09.293295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.624 "name": "Existed_Raid",
00:08:43.624 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.624 "strip_size_kb": 64,
00:08:43.624 "state": "configuring",
00:08:43.624 "raid_level": "concat",
00:08:43.624 "superblock": false,
00:08:43.624 "num_base_bdevs": 3,
00:08:43.624 "num_base_bdevs_discovered": 1,
00:08:43.624 "num_base_bdevs_operational": 3,
00:08:43.624 "base_bdevs_list": [
00:08:43.624 {
00:08:43.624 "name": "BaseBdev1",
00:08:43.624 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.624 "is_configured": false,
00:08:43.624 "data_offset": 0,
00:08:43.624 "data_size": 0
00:08:43.624 },
00:08:43.624 {
00:08:43.624 "name": null,
00:08:43.624 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792",
00:08:43.624 "is_configured": false,
00:08:43.624 "data_offset": 0,
00:08:43.624 "data_size": 65536
00:08:43.624 },
00:08:43.624 {
00:08:43.624 "name": "BaseBdev3",
00:08:43.624 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa",
00:08:43.624 "is_configured": true,
00:08:43.624 "data_offset": 0,
00:08:43.624 "data_size": 65536
00:08:43.624 }
00:08:43.624 ]
00:08:43.624 }'
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.624 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.884 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:43.884 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.885 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.885 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.885 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.145 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:44.145 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:44.145 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.145 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.145 [2024-11-18 10:37:09.820685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:44.145 BaseBdev1
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.146 [
00:08:44.146 {
00:08:44.146 "name": "BaseBdev1",
00:08:44.146 "aliases": [
00:08:44.146 "6578838f-daf2-4c54-9da2-eed636c350bb"
00:08:44.146 ],
00:08:44.146 "product_name": "Malloc disk",
00:08:44.146 "block_size": 512,
00:08:44.146 "num_blocks": 65536,
00:08:44.146 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb",
00:08:44.146 "assigned_rate_limits": {
00:08:44.146 "rw_ios_per_sec": 0,
00:08:44.146 "rw_mbytes_per_sec": 0,
00:08:44.146 "r_mbytes_per_sec": 0,
00:08:44.146 "w_mbytes_per_sec": 0
00:08:44.146 },
00:08:44.146 "claimed": true,
00:08:44.146 "claim_type": "exclusive_write",
00:08:44.146 "zoned": false,
00:08:44.146 "supported_io_types": {
00:08:44.146 "read": true,
00:08:44.146 "write": true,
00:08:44.146 "unmap": true,
00:08:44.146 "flush": true,
00:08:44.146 "reset": true,
00:08:44.146 "nvme_admin": false,
00:08:44.146 "nvme_io": false,
00:08:44.146 "nvme_io_md": false,
00:08:44.146 "write_zeroes": true,
00:08:44.146 "zcopy": true,
00:08:44.146 "get_zone_info": false,
00:08:44.146 "zone_management": false,
00:08:44.146 "zone_append": false,
00:08:44.146 "compare": false,
00:08:44.146 "compare_and_write": false,
00:08:44.146 "abort": true,
00:08:44.146 "seek_hole": false,
00:08:44.146 "seek_data": false,
00:08:44.146 "copy": true,
00:08:44.146 "nvme_iov_md": false
00:08:44.146 },
00:08:44.146 "memory_domains": [
00:08:44.146 {
00:08:44.146 "dma_device_id": "system",
00:08:44.146 "dma_device_type": 1
00:08:44.146 },
00:08:44.146 {
00:08:44.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:44.146 "dma_device_type": 2
00:08:44.146 }
00:08:44.146 ],
00:08:44.146 "driver_specific": {}
00:08:44.146 }
00:08:44.146 ]
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:44.146 10:37:09
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.146 "name": "Existed_Raid", 00:08:44.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.146 "strip_size_kb": 64, 00:08:44.146 "state": "configuring", 00:08:44.146 "raid_level": "concat", 00:08:44.146 "superblock": false, 00:08:44.146 "num_base_bdevs": 3, 00:08:44.146 "num_base_bdevs_discovered": 2, 00:08:44.146 "num_base_bdevs_operational": 3, 00:08:44.146 "base_bdevs_list": [ 00:08:44.146 { 00:08:44.146 "name": "BaseBdev1", 
00:08:44.146 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb", 00:08:44.146 "is_configured": true, 00:08:44.146 "data_offset": 0, 00:08:44.146 "data_size": 65536 00:08:44.146 }, 00:08:44.146 { 00:08:44.146 "name": null, 00:08:44.146 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792", 00:08:44.146 "is_configured": false, 00:08:44.146 "data_offset": 0, 00:08:44.146 "data_size": 65536 00:08:44.146 }, 00:08:44.146 { 00:08:44.146 "name": "BaseBdev3", 00:08:44.146 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa", 00:08:44.146 "is_configured": true, 00:08:44.146 "data_offset": 0, 00:08:44.146 "data_size": 65536 00:08:44.146 } 00:08:44.146 ] 00:08:44.146 }' 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.146 10:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.406 [2024-11-18 10:37:10.279960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:44.406 
10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.406 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.666 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.666 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.666 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.666 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.666 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.666 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.666 "name": "Existed_Raid", 00:08:44.666 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:44.666 "strip_size_kb": 64, 00:08:44.666 "state": "configuring", 00:08:44.666 "raid_level": "concat", 00:08:44.666 "superblock": false, 00:08:44.666 "num_base_bdevs": 3, 00:08:44.666 "num_base_bdevs_discovered": 1, 00:08:44.666 "num_base_bdevs_operational": 3, 00:08:44.666 "base_bdevs_list": [ 00:08:44.666 { 00:08:44.666 "name": "BaseBdev1", 00:08:44.666 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb", 00:08:44.666 "is_configured": true, 00:08:44.666 "data_offset": 0, 00:08:44.666 "data_size": 65536 00:08:44.666 }, 00:08:44.666 { 00:08:44.666 "name": null, 00:08:44.666 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792", 00:08:44.666 "is_configured": false, 00:08:44.666 "data_offset": 0, 00:08:44.666 "data_size": 65536 00:08:44.666 }, 00:08:44.666 { 00:08:44.666 "name": null, 00:08:44.666 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa", 00:08:44.666 "is_configured": false, 00:08:44.666 "data_offset": 0, 00:08:44.666 "data_size": 65536 00:08:44.666 } 00:08:44.666 ] 00:08:44.666 }' 00:08:44.666 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.666 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.926 [2024-11-18 10:37:10.699252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.926 "name": "Existed_Raid", 00:08:44.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.926 "strip_size_kb": 64, 00:08:44.926 "state": "configuring", 00:08:44.926 "raid_level": "concat", 00:08:44.926 "superblock": false, 00:08:44.926 "num_base_bdevs": 3, 00:08:44.926 "num_base_bdevs_discovered": 2, 00:08:44.926 "num_base_bdevs_operational": 3, 00:08:44.926 "base_bdevs_list": [ 00:08:44.926 { 00:08:44.926 "name": "BaseBdev1", 00:08:44.926 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb", 00:08:44.926 "is_configured": true, 00:08:44.926 "data_offset": 0, 00:08:44.926 "data_size": 65536 00:08:44.926 }, 00:08:44.926 { 00:08:44.926 "name": null, 00:08:44.926 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792", 00:08:44.926 "is_configured": false, 00:08:44.926 "data_offset": 0, 00:08:44.926 "data_size": 65536 00:08:44.926 }, 00:08:44.926 { 00:08:44.926 "name": "BaseBdev3", 00:08:44.926 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa", 00:08:44.926 "is_configured": true, 00:08:44.926 "data_offset": 0, 00:08:44.926 "data_size": 65536 00:08:44.926 } 00:08:44.926 ] 00:08:44.926 }' 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.926 10:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.497 [2024-11-18 10:37:11.218794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.497 
10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.497 "name": "Existed_Raid", 00:08:45.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.497 "strip_size_kb": 64, 00:08:45.497 "state": "configuring", 00:08:45.497 "raid_level": "concat", 00:08:45.497 "superblock": false, 00:08:45.497 "num_base_bdevs": 3, 00:08:45.497 "num_base_bdevs_discovered": 1, 00:08:45.497 "num_base_bdevs_operational": 3, 00:08:45.497 "base_bdevs_list": [ 00:08:45.497 { 00:08:45.497 "name": null, 00:08:45.497 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb", 00:08:45.497 "is_configured": false, 00:08:45.497 "data_offset": 0, 00:08:45.497 "data_size": 65536 00:08:45.497 }, 00:08:45.497 { 00:08:45.497 "name": null, 00:08:45.497 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792", 00:08:45.497 "is_configured": false, 00:08:45.497 "data_offset": 0, 00:08:45.497 "data_size": 65536 00:08:45.497 }, 00:08:45.497 { 00:08:45.497 "name": "BaseBdev3", 00:08:45.497 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa", 00:08:45.497 "is_configured": true, 00:08:45.497 "data_offset": 0, 00:08:45.497 "data_size": 65536 00:08:45.497 } 00:08:45.497 ] 00:08:45.497 }' 00:08:45.497 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.497 10:37:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.067 [2024-11-18 10:37:11.748969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.067 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.067 10:37:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.068 "name": "Existed_Raid", 00:08:46.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.068 "strip_size_kb": 64, 00:08:46.068 "state": "configuring", 00:08:46.068 "raid_level": "concat", 00:08:46.068 "superblock": false, 00:08:46.068 "num_base_bdevs": 3, 00:08:46.068 "num_base_bdevs_discovered": 2, 00:08:46.068 "num_base_bdevs_operational": 3, 00:08:46.068 "base_bdevs_list": [ 00:08:46.068 { 00:08:46.068 "name": null, 00:08:46.068 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb", 00:08:46.068 "is_configured": false, 00:08:46.068 "data_offset": 0, 00:08:46.068 "data_size": 65536 00:08:46.068 }, 00:08:46.068 { 00:08:46.068 "name": "BaseBdev2", 00:08:46.068 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792", 00:08:46.068 "is_configured": true, 00:08:46.068 "data_offset": 
0, 00:08:46.068 "data_size": 65536 00:08:46.068 }, 00:08:46.068 { 00:08:46.068 "name": "BaseBdev3", 00:08:46.068 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa", 00:08:46.068 "is_configured": true, 00:08:46.068 "data_offset": 0, 00:08:46.068 "data_size": 65536 00:08:46.068 } 00:08:46.068 ] 00:08:46.068 }' 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.068 10:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:46.328 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.588 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6578838f-daf2-4c54-9da2-eed636c350bb 00:08:46.588 10:37:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.588 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.588 [2024-11-18 10:37:12.269751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:46.588 [2024-11-18 10:37:12.269857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:46.588 [2024-11-18 10:37:12.269873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:46.588 [2024-11-18 10:37:12.270153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:46.589 [2024-11-18 10:37:12.270339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:46.589 [2024-11-18 10:37:12.270350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:46.589 [2024-11-18 10:37:12.270616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.589 NewBaseBdev 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.589 
10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.589 [ 00:08:46.589 { 00:08:46.589 "name": "NewBaseBdev", 00:08:46.589 "aliases": [ 00:08:46.589 "6578838f-daf2-4c54-9da2-eed636c350bb" 00:08:46.589 ], 00:08:46.589 "product_name": "Malloc disk", 00:08:46.589 "block_size": 512, 00:08:46.589 "num_blocks": 65536, 00:08:46.589 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb", 00:08:46.589 "assigned_rate_limits": { 00:08:46.589 "rw_ios_per_sec": 0, 00:08:46.589 "rw_mbytes_per_sec": 0, 00:08:46.589 "r_mbytes_per_sec": 0, 00:08:46.589 "w_mbytes_per_sec": 0 00:08:46.589 }, 00:08:46.589 "claimed": true, 00:08:46.589 "claim_type": "exclusive_write", 00:08:46.589 "zoned": false, 00:08:46.589 "supported_io_types": { 00:08:46.589 "read": true, 00:08:46.589 "write": true, 00:08:46.589 "unmap": true, 00:08:46.589 "flush": true, 00:08:46.589 "reset": true, 00:08:46.589 "nvme_admin": false, 00:08:46.589 "nvme_io": false, 00:08:46.589 "nvme_io_md": false, 00:08:46.589 "write_zeroes": true, 00:08:46.589 "zcopy": true, 00:08:46.589 "get_zone_info": false, 00:08:46.589 "zone_management": false, 00:08:46.589 "zone_append": false, 00:08:46.589 "compare": false, 00:08:46.589 "compare_and_write": false, 00:08:46.589 "abort": true, 00:08:46.589 "seek_hole": false, 00:08:46.589 "seek_data": false, 00:08:46.589 "copy": true, 00:08:46.589 "nvme_iov_md": false 00:08:46.589 }, 00:08:46.589 
"memory_domains": [ 00:08:46.589 { 00:08:46.589 "dma_device_id": "system", 00:08:46.589 "dma_device_type": 1 00:08:46.589 }, 00:08:46.589 { 00:08:46.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.589 "dma_device_type": 2 00:08:46.589 } 00:08:46.589 ], 00:08:46.589 "driver_specific": {} 00:08:46.589 } 00:08:46.589 ] 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.589 "name": "Existed_Raid", 00:08:46.589 "uuid": "9beddd30-a0ac-4187-9f53-4ee244864939", 00:08:46.589 "strip_size_kb": 64, 00:08:46.589 "state": "online", 00:08:46.589 "raid_level": "concat", 00:08:46.589 "superblock": false, 00:08:46.589 "num_base_bdevs": 3, 00:08:46.589 "num_base_bdevs_discovered": 3, 00:08:46.589 "num_base_bdevs_operational": 3, 00:08:46.589 "base_bdevs_list": [ 00:08:46.589 { 00:08:46.589 "name": "NewBaseBdev", 00:08:46.589 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb", 00:08:46.589 "is_configured": true, 00:08:46.589 "data_offset": 0, 00:08:46.589 "data_size": 65536 00:08:46.589 }, 00:08:46.589 { 00:08:46.589 "name": "BaseBdev2", 00:08:46.589 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792", 00:08:46.589 "is_configured": true, 00:08:46.589 "data_offset": 0, 00:08:46.589 "data_size": 65536 00:08:46.589 }, 00:08:46.589 { 00:08:46.589 "name": "BaseBdev3", 00:08:46.589 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa", 00:08:46.589 "is_configured": true, 00:08:46.589 "data_offset": 0, 00:08:46.589 "data_size": 65536 00:08:46.589 } 00:08:46.589 ] 00:08:46.589 }' 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.589 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.160 [2024-11-18 10:37:12.793181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.160 "name": "Existed_Raid", 00:08:47.160 "aliases": [ 00:08:47.160 "9beddd30-a0ac-4187-9f53-4ee244864939" 00:08:47.160 ], 00:08:47.160 "product_name": "Raid Volume", 00:08:47.160 "block_size": 512, 00:08:47.160 "num_blocks": 196608, 00:08:47.160 "uuid": "9beddd30-a0ac-4187-9f53-4ee244864939", 00:08:47.160 "assigned_rate_limits": { 00:08:47.160 "rw_ios_per_sec": 0, 00:08:47.160 "rw_mbytes_per_sec": 0, 00:08:47.160 "r_mbytes_per_sec": 0, 00:08:47.160 "w_mbytes_per_sec": 0 00:08:47.160 }, 00:08:47.160 "claimed": false, 00:08:47.160 "zoned": false, 00:08:47.160 "supported_io_types": { 00:08:47.160 "read": true, 00:08:47.160 "write": true, 00:08:47.160 "unmap": true, 00:08:47.160 "flush": true, 00:08:47.160 "reset": true, 00:08:47.160 "nvme_admin": false, 00:08:47.160 "nvme_io": false, 00:08:47.160 "nvme_io_md": false, 00:08:47.160 "write_zeroes": true, 
00:08:47.160 "zcopy": false, 00:08:47.160 "get_zone_info": false, 00:08:47.160 "zone_management": false, 00:08:47.160 "zone_append": false, 00:08:47.160 "compare": false, 00:08:47.160 "compare_and_write": false, 00:08:47.160 "abort": false, 00:08:47.160 "seek_hole": false, 00:08:47.160 "seek_data": false, 00:08:47.160 "copy": false, 00:08:47.160 "nvme_iov_md": false 00:08:47.160 }, 00:08:47.160 "memory_domains": [ 00:08:47.160 { 00:08:47.160 "dma_device_id": "system", 00:08:47.160 "dma_device_type": 1 00:08:47.160 }, 00:08:47.160 { 00:08:47.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.160 "dma_device_type": 2 00:08:47.160 }, 00:08:47.160 { 00:08:47.160 "dma_device_id": "system", 00:08:47.160 "dma_device_type": 1 00:08:47.160 }, 00:08:47.160 { 00:08:47.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.160 "dma_device_type": 2 00:08:47.160 }, 00:08:47.160 { 00:08:47.160 "dma_device_id": "system", 00:08:47.160 "dma_device_type": 1 00:08:47.160 }, 00:08:47.160 { 00:08:47.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.160 "dma_device_type": 2 00:08:47.160 } 00:08:47.160 ], 00:08:47.160 "driver_specific": { 00:08:47.160 "raid": { 00:08:47.160 "uuid": "9beddd30-a0ac-4187-9f53-4ee244864939", 00:08:47.160 "strip_size_kb": 64, 00:08:47.160 "state": "online", 00:08:47.160 "raid_level": "concat", 00:08:47.160 "superblock": false, 00:08:47.160 "num_base_bdevs": 3, 00:08:47.160 "num_base_bdevs_discovered": 3, 00:08:47.160 "num_base_bdevs_operational": 3, 00:08:47.160 "base_bdevs_list": [ 00:08:47.160 { 00:08:47.160 "name": "NewBaseBdev", 00:08:47.160 "uuid": "6578838f-daf2-4c54-9da2-eed636c350bb", 00:08:47.160 "is_configured": true, 00:08:47.160 "data_offset": 0, 00:08:47.160 "data_size": 65536 00:08:47.160 }, 00:08:47.160 { 00:08:47.160 "name": "BaseBdev2", 00:08:47.160 "uuid": "9b23a81d-7b0c-40cf-ba00-8ae66b847792", 00:08:47.160 "is_configured": true, 00:08:47.160 "data_offset": 0, 00:08:47.160 "data_size": 65536 00:08:47.160 }, 00:08:47.160 { 
00:08:47.160 "name": "BaseBdev3", 00:08:47.160 "uuid": "938a5f1c-a722-4b74-85ce-fd2fdeecdbfa", 00:08:47.160 "is_configured": true, 00:08:47.160 "data_offset": 0, 00:08:47.160 "data_size": 65536 00:08:47.160 } 00:08:47.160 ] 00:08:47.160 } 00:08:47.160 } 00:08:47.160 }' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:47.160 BaseBdev2 00:08:47.160 BaseBdev3' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.160 10:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.160 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:47.420 [2024-11-18 10:37:13.048450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.420 [2024-11-18 10:37:13.048476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.420 [2024-11-18 10:37:13.048552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.420 [2024-11-18 10:37:13.048608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.420 [2024-11-18 10:37:13.048620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65489 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65489 ']' 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65489 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65489 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65489' 00:08:47.420 killing process with pid 65489 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65489 00:08:47.420 [2024-11-18 10:37:13.098195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.420 10:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65489 00:08:47.680 [2024-11-18 10:37:13.412013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:49.061 00:08:49.061 real 0m10.314s 00:08:49.061 user 0m16.067s 00:08:49.061 sys 0m1.994s 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.061 ************************************ 00:08:49.061 END TEST raid_state_function_test 00:08:49.061 ************************************ 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.061 10:37:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:49.061 10:37:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:49.061 10:37:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.061 10:37:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.061 ************************************ 00:08:49.061 START TEST raid_state_function_test_sb 00:08:49.061 ************************************ 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66110 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:49.061 Process raid pid: 66110 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66110' 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66110 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66110 ']' 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.061 10:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.061 [2024-11-18 10:37:14.731566] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:49.061 [2024-11-18 10:37:14.731760] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.061 [2024-11-18 10:37:14.905743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.321 [2024-11-18 10:37:15.037952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.581 [2024-11-18 10:37:15.271869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.581 [2024-11-18 10:37:15.271986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.840 [2024-11-18 10:37:15.563016] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.840 [2024-11-18 10:37:15.563077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.840 [2024-11-18 
10:37:15.563089] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.840 [2024-11-18 10:37:15.563099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.840 [2024-11-18 10:37:15.563105] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.840 [2024-11-18 10:37:15.563114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.840 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.840 "name": "Existed_Raid", 00:08:49.840 "uuid": "38aaba85-d933-45a5-9215-2af2ad4bc20f", 00:08:49.840 "strip_size_kb": 64, 00:08:49.840 "state": "configuring", 00:08:49.840 "raid_level": "concat", 00:08:49.840 "superblock": true, 00:08:49.840 "num_base_bdevs": 3, 00:08:49.840 "num_base_bdevs_discovered": 0, 00:08:49.840 "num_base_bdevs_operational": 3, 00:08:49.840 "base_bdevs_list": [ 00:08:49.840 { 00:08:49.840 "name": "BaseBdev1", 00:08:49.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.840 "is_configured": false, 00:08:49.840 "data_offset": 0, 00:08:49.840 "data_size": 0 00:08:49.840 }, 00:08:49.840 { 00:08:49.840 "name": "BaseBdev2", 00:08:49.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.840 "is_configured": false, 00:08:49.840 "data_offset": 0, 00:08:49.840 "data_size": 0 00:08:49.840 }, 00:08:49.841 { 00:08:49.841 "name": "BaseBdev3", 00:08:49.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.841 "is_configured": false, 00:08:49.841 "data_offset": 0, 00:08:49.841 "data_size": 0 00:08:49.841 } 00:08:49.841 ] 00:08:49.841 }' 00:08:49.841 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.841 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.100 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.100 10:37:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.100 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.100 [2024-11-18 10:37:15.974297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.100 [2024-11-18 10:37:15.974372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:50.100 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.100 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.100 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.100 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 [2024-11-18 10:37:15.986291] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.359 [2024-11-18 10:37:15.986370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.359 [2024-11-18 10:37:15.986397] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.359 [2024-11-18 10:37:15.986419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.359 [2024-11-18 10:37:15.986437] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.359 [2024-11-18 10:37:15.986458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.359 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.359 10:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.359 
10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.359 10:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 [2024-11-18 10:37:16.038775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.359 BaseBdev1 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.359 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 [ 00:08:50.359 { 
00:08:50.359 "name": "BaseBdev1", 00:08:50.359 "aliases": [ 00:08:50.359 "858297fd-b7f4-43ba-bf9e-df0e992cc6b7" 00:08:50.359 ], 00:08:50.359 "product_name": "Malloc disk", 00:08:50.359 "block_size": 512, 00:08:50.359 "num_blocks": 65536, 00:08:50.359 "uuid": "858297fd-b7f4-43ba-bf9e-df0e992cc6b7", 00:08:50.359 "assigned_rate_limits": { 00:08:50.359 "rw_ios_per_sec": 0, 00:08:50.359 "rw_mbytes_per_sec": 0, 00:08:50.359 "r_mbytes_per_sec": 0, 00:08:50.359 "w_mbytes_per_sec": 0 00:08:50.359 }, 00:08:50.359 "claimed": true, 00:08:50.359 "claim_type": "exclusive_write", 00:08:50.359 "zoned": false, 00:08:50.359 "supported_io_types": { 00:08:50.359 "read": true, 00:08:50.359 "write": true, 00:08:50.359 "unmap": true, 00:08:50.359 "flush": true, 00:08:50.359 "reset": true, 00:08:50.359 "nvme_admin": false, 00:08:50.359 "nvme_io": false, 00:08:50.359 "nvme_io_md": false, 00:08:50.359 "write_zeroes": true, 00:08:50.359 "zcopy": true, 00:08:50.359 "get_zone_info": false, 00:08:50.360 "zone_management": false, 00:08:50.360 "zone_append": false, 00:08:50.360 "compare": false, 00:08:50.360 "compare_and_write": false, 00:08:50.360 "abort": true, 00:08:50.360 "seek_hole": false, 00:08:50.360 "seek_data": false, 00:08:50.360 "copy": true, 00:08:50.360 "nvme_iov_md": false 00:08:50.360 }, 00:08:50.360 "memory_domains": [ 00:08:50.360 { 00:08:50.360 "dma_device_id": "system", 00:08:50.360 "dma_device_type": 1 00:08:50.360 }, 00:08:50.360 { 00:08:50.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.360 "dma_device_type": 2 00:08:50.360 } 00:08:50.360 ], 00:08:50.360 "driver_specific": {} 00:08:50.360 } 00:08:50.360 ] 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.360 "name": "Existed_Raid", 00:08:50.360 "uuid": "54c18f39-3b48-4994-8f44-1d7368b34f95", 00:08:50.360 "strip_size_kb": 64, 00:08:50.360 "state": "configuring", 00:08:50.360 "raid_level": "concat", 00:08:50.360 "superblock": true, 00:08:50.360 
"num_base_bdevs": 3, 00:08:50.360 "num_base_bdevs_discovered": 1, 00:08:50.360 "num_base_bdevs_operational": 3, 00:08:50.360 "base_bdevs_list": [ 00:08:50.360 { 00:08:50.360 "name": "BaseBdev1", 00:08:50.360 "uuid": "858297fd-b7f4-43ba-bf9e-df0e992cc6b7", 00:08:50.360 "is_configured": true, 00:08:50.360 "data_offset": 2048, 00:08:50.360 "data_size": 63488 00:08:50.360 }, 00:08:50.360 { 00:08:50.360 "name": "BaseBdev2", 00:08:50.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.360 "is_configured": false, 00:08:50.360 "data_offset": 0, 00:08:50.360 "data_size": 0 00:08:50.360 }, 00:08:50.360 { 00:08:50.360 "name": "BaseBdev3", 00:08:50.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.360 "is_configured": false, 00:08:50.360 "data_offset": 0, 00:08:50.360 "data_size": 0 00:08:50.360 } 00:08:50.360 ] 00:08:50.360 }' 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.360 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.619 [2024-11-18 10:37:16.486062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.619 [2024-11-18 10:37:16.486189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.619 
10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.619 [2024-11-18 10:37:16.494101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.619 [2024-11-18 10:37:16.496289] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.619 [2024-11-18 10:37:16.496374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.619 [2024-11-18 10:37:16.496388] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.619 [2024-11-18 10:37:16.496398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.619 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.878 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.878 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.878 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.878 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.878 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.878 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.878 "name": "Existed_Raid", 00:08:50.878 "uuid": "def62d48-aa38-41e5-aef6-283d6a925280", 00:08:50.878 "strip_size_kb": 64, 00:08:50.878 "state": "configuring", 00:08:50.878 "raid_level": "concat", 00:08:50.878 "superblock": true, 00:08:50.878 "num_base_bdevs": 3, 00:08:50.878 "num_base_bdevs_discovered": 1, 00:08:50.878 "num_base_bdevs_operational": 3, 00:08:50.878 "base_bdevs_list": [ 00:08:50.878 { 00:08:50.878 "name": "BaseBdev1", 00:08:50.878 "uuid": "858297fd-b7f4-43ba-bf9e-df0e992cc6b7", 00:08:50.878 "is_configured": true, 00:08:50.878 "data_offset": 2048, 00:08:50.878 "data_size": 63488 00:08:50.878 }, 00:08:50.878 { 00:08:50.878 "name": "BaseBdev2", 00:08:50.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.878 "is_configured": false, 00:08:50.878 "data_offset": 0, 00:08:50.878 "data_size": 0 00:08:50.878 }, 00:08:50.878 { 00:08:50.878 "name": "BaseBdev3", 00:08:50.878 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:50.878 "is_configured": false, 00:08:50.878 "data_offset": 0, 00:08:50.878 "data_size": 0 00:08:50.878 } 00:08:50.878 ] 00:08:50.878 }' 00:08:50.878 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.878 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.138 [2024-11-18 10:37:16.943500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.138 BaseBdev2 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.138 [ 00:08:51.138 { 00:08:51.138 "name": "BaseBdev2", 00:08:51.138 "aliases": [ 00:08:51.138 "9f07ae0a-bdaa-42dc-b9f5-494110454a13" 00:08:51.138 ], 00:08:51.138 "product_name": "Malloc disk", 00:08:51.138 "block_size": 512, 00:08:51.138 "num_blocks": 65536, 00:08:51.138 "uuid": "9f07ae0a-bdaa-42dc-b9f5-494110454a13", 00:08:51.138 "assigned_rate_limits": { 00:08:51.138 "rw_ios_per_sec": 0, 00:08:51.138 "rw_mbytes_per_sec": 0, 00:08:51.138 "r_mbytes_per_sec": 0, 00:08:51.138 "w_mbytes_per_sec": 0 00:08:51.138 }, 00:08:51.138 "claimed": true, 00:08:51.138 "claim_type": "exclusive_write", 00:08:51.138 "zoned": false, 00:08:51.138 "supported_io_types": { 00:08:51.138 "read": true, 00:08:51.138 "write": true, 00:08:51.138 "unmap": true, 00:08:51.138 "flush": true, 00:08:51.138 "reset": true, 00:08:51.138 "nvme_admin": false, 00:08:51.138 "nvme_io": false, 00:08:51.138 "nvme_io_md": false, 00:08:51.138 "write_zeroes": true, 00:08:51.138 "zcopy": true, 00:08:51.138 "get_zone_info": false, 00:08:51.138 "zone_management": false, 00:08:51.138 "zone_append": false, 00:08:51.138 "compare": false, 00:08:51.138 "compare_and_write": false, 00:08:51.138 "abort": true, 00:08:51.138 "seek_hole": false, 00:08:51.138 "seek_data": false, 00:08:51.138 "copy": true, 00:08:51.138 "nvme_iov_md": false 00:08:51.138 }, 00:08:51.138 "memory_domains": [ 00:08:51.138 { 00:08:51.138 "dma_device_id": "system", 00:08:51.138 "dma_device_type": 1 00:08:51.138 }, 00:08:51.138 { 00:08:51.138 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.138 "dma_device_type": 2 00:08:51.138 } 00:08:51.138 ], 00:08:51.138 "driver_specific": {} 00:08:51.138 } 00:08:51.138 ] 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.138 10:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.138 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.397 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.397 "name": "Existed_Raid", 00:08:51.397 "uuid": "def62d48-aa38-41e5-aef6-283d6a925280", 00:08:51.397 "strip_size_kb": 64, 00:08:51.397 "state": "configuring", 00:08:51.397 "raid_level": "concat", 00:08:51.397 "superblock": true, 00:08:51.397 "num_base_bdevs": 3, 00:08:51.397 "num_base_bdevs_discovered": 2, 00:08:51.397 "num_base_bdevs_operational": 3, 00:08:51.397 "base_bdevs_list": [ 00:08:51.397 { 00:08:51.397 "name": "BaseBdev1", 00:08:51.397 "uuid": "858297fd-b7f4-43ba-bf9e-df0e992cc6b7", 00:08:51.397 "is_configured": true, 00:08:51.397 "data_offset": 2048, 00:08:51.397 "data_size": 63488 00:08:51.397 }, 00:08:51.397 { 00:08:51.397 "name": "BaseBdev2", 00:08:51.397 "uuid": "9f07ae0a-bdaa-42dc-b9f5-494110454a13", 00:08:51.397 "is_configured": true, 00:08:51.397 "data_offset": 2048, 00:08:51.397 "data_size": 63488 00:08:51.397 }, 00:08:51.397 { 00:08:51.397 "name": "BaseBdev3", 00:08:51.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.397 "is_configured": false, 00:08:51.397 "data_offset": 0, 00:08:51.397 "data_size": 0 00:08:51.397 } 00:08:51.397 ] 00:08:51.397 }' 00:08:51.397 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.397 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.657 10:37:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 [2024-11-18 10:37:17.450673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.657 [2024-11-18 10:37:17.451036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.657 BaseBdev3 00:08:51.657 [2024-11-18 10:37:17.451124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.657 [2024-11-18 10:37:17.451483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:51.657 [2024-11-18 10:37:17.451659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.657 [2024-11-18 10:37:17.451670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:51.657 [2024-11-18 10:37:17.451831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 [ 00:08:51.657 { 00:08:51.657 "name": "BaseBdev3", 00:08:51.657 "aliases": [ 00:08:51.657 "cf13100b-46ba-43d0-84ad-8fc547fb04c6" 00:08:51.657 ], 00:08:51.657 "product_name": "Malloc disk", 00:08:51.657 "block_size": 512, 00:08:51.657 "num_blocks": 65536, 00:08:51.657 "uuid": "cf13100b-46ba-43d0-84ad-8fc547fb04c6", 00:08:51.657 "assigned_rate_limits": { 00:08:51.657 "rw_ios_per_sec": 0, 00:08:51.657 "rw_mbytes_per_sec": 0, 00:08:51.657 "r_mbytes_per_sec": 0, 00:08:51.657 "w_mbytes_per_sec": 0 00:08:51.657 }, 00:08:51.657 "claimed": true, 00:08:51.657 "claim_type": "exclusive_write", 00:08:51.657 "zoned": false, 00:08:51.657 "supported_io_types": { 00:08:51.657 "read": true, 00:08:51.657 "write": true, 00:08:51.657 "unmap": true, 00:08:51.657 "flush": true, 00:08:51.657 "reset": true, 00:08:51.657 "nvme_admin": false, 00:08:51.657 "nvme_io": false, 00:08:51.657 "nvme_io_md": false, 00:08:51.657 "write_zeroes": true, 00:08:51.657 "zcopy": true, 00:08:51.657 "get_zone_info": false, 00:08:51.657 "zone_management": false, 00:08:51.657 "zone_append": false, 00:08:51.657 "compare": false, 00:08:51.657 "compare_and_write": false, 00:08:51.657 "abort": true, 00:08:51.657 "seek_hole": false, 00:08:51.657 "seek_data": false, 
00:08:51.657 "copy": true, 00:08:51.657 "nvme_iov_md": false 00:08:51.657 }, 00:08:51.657 "memory_domains": [ 00:08:51.657 { 00:08:51.657 "dma_device_id": "system", 00:08:51.657 "dma_device_type": 1 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.657 "dma_device_type": 2 00:08:51.657 } 00:08:51.657 ], 00:08:51.657 "driver_specific": {} 00:08:51.657 } 00:08:51.657 ] 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.916 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.916 "name": "Existed_Raid", 00:08:51.916 "uuid": "def62d48-aa38-41e5-aef6-283d6a925280", 00:08:51.916 "strip_size_kb": 64, 00:08:51.916 "state": "online", 00:08:51.916 "raid_level": "concat", 00:08:51.916 "superblock": true, 00:08:51.916 "num_base_bdevs": 3, 00:08:51.916 "num_base_bdevs_discovered": 3, 00:08:51.916 "num_base_bdevs_operational": 3, 00:08:51.916 "base_bdevs_list": [ 00:08:51.916 { 00:08:51.916 "name": "BaseBdev1", 00:08:51.916 "uuid": "858297fd-b7f4-43ba-bf9e-df0e992cc6b7", 00:08:51.916 "is_configured": true, 00:08:51.916 "data_offset": 2048, 00:08:51.916 "data_size": 63488 00:08:51.916 }, 00:08:51.916 { 00:08:51.916 "name": "BaseBdev2", 00:08:51.916 "uuid": "9f07ae0a-bdaa-42dc-b9f5-494110454a13", 00:08:51.916 "is_configured": true, 00:08:51.916 "data_offset": 2048, 00:08:51.916 "data_size": 63488 00:08:51.916 }, 00:08:51.916 { 00:08:51.916 "name": "BaseBdev3", 00:08:51.916 "uuid": "cf13100b-46ba-43d0-84ad-8fc547fb04c6", 00:08:51.916 "is_configured": true, 00:08:51.916 "data_offset": 2048, 00:08:51.916 "data_size": 63488 00:08:51.916 } 00:08:51.916 ] 00:08:51.916 }' 00:08:51.916 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.916 10:37:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.176 [2024-11-18 10:37:17.954085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.176 "name": "Existed_Raid", 00:08:52.176 "aliases": [ 00:08:52.176 "def62d48-aa38-41e5-aef6-283d6a925280" 00:08:52.176 ], 00:08:52.176 "product_name": "Raid Volume", 00:08:52.176 "block_size": 512, 00:08:52.176 "num_blocks": 190464, 00:08:52.176 "uuid": "def62d48-aa38-41e5-aef6-283d6a925280", 00:08:52.176 "assigned_rate_limits": { 00:08:52.176 "rw_ios_per_sec": 0, 00:08:52.176 "rw_mbytes_per_sec": 0, 00:08:52.176 
"r_mbytes_per_sec": 0, 00:08:52.176 "w_mbytes_per_sec": 0 00:08:52.176 }, 00:08:52.176 "claimed": false, 00:08:52.176 "zoned": false, 00:08:52.176 "supported_io_types": { 00:08:52.176 "read": true, 00:08:52.176 "write": true, 00:08:52.176 "unmap": true, 00:08:52.176 "flush": true, 00:08:52.176 "reset": true, 00:08:52.176 "nvme_admin": false, 00:08:52.176 "nvme_io": false, 00:08:52.176 "nvme_io_md": false, 00:08:52.176 "write_zeroes": true, 00:08:52.176 "zcopy": false, 00:08:52.176 "get_zone_info": false, 00:08:52.176 "zone_management": false, 00:08:52.176 "zone_append": false, 00:08:52.176 "compare": false, 00:08:52.176 "compare_and_write": false, 00:08:52.176 "abort": false, 00:08:52.176 "seek_hole": false, 00:08:52.176 "seek_data": false, 00:08:52.176 "copy": false, 00:08:52.176 "nvme_iov_md": false 00:08:52.176 }, 00:08:52.176 "memory_domains": [ 00:08:52.176 { 00:08:52.176 "dma_device_id": "system", 00:08:52.176 "dma_device_type": 1 00:08:52.176 }, 00:08:52.176 { 00:08:52.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.176 "dma_device_type": 2 00:08:52.176 }, 00:08:52.176 { 00:08:52.176 "dma_device_id": "system", 00:08:52.176 "dma_device_type": 1 00:08:52.176 }, 00:08:52.176 { 00:08:52.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.176 "dma_device_type": 2 00:08:52.176 }, 00:08:52.176 { 00:08:52.176 "dma_device_id": "system", 00:08:52.176 "dma_device_type": 1 00:08:52.176 }, 00:08:52.176 { 00:08:52.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.176 "dma_device_type": 2 00:08:52.176 } 00:08:52.176 ], 00:08:52.176 "driver_specific": { 00:08:52.176 "raid": { 00:08:52.176 "uuid": "def62d48-aa38-41e5-aef6-283d6a925280", 00:08:52.176 "strip_size_kb": 64, 00:08:52.176 "state": "online", 00:08:52.176 "raid_level": "concat", 00:08:52.176 "superblock": true, 00:08:52.176 "num_base_bdevs": 3, 00:08:52.176 "num_base_bdevs_discovered": 3, 00:08:52.176 "num_base_bdevs_operational": 3, 00:08:52.176 "base_bdevs_list": [ 00:08:52.176 { 00:08:52.176 
"name": "BaseBdev1", 00:08:52.176 "uuid": "858297fd-b7f4-43ba-bf9e-df0e992cc6b7", 00:08:52.176 "is_configured": true, 00:08:52.176 "data_offset": 2048, 00:08:52.176 "data_size": 63488 00:08:52.176 }, 00:08:52.176 { 00:08:52.176 "name": "BaseBdev2", 00:08:52.176 "uuid": "9f07ae0a-bdaa-42dc-b9f5-494110454a13", 00:08:52.176 "is_configured": true, 00:08:52.176 "data_offset": 2048, 00:08:52.176 "data_size": 63488 00:08:52.176 }, 00:08:52.176 { 00:08:52.176 "name": "BaseBdev3", 00:08:52.176 "uuid": "cf13100b-46ba-43d0-84ad-8fc547fb04c6", 00:08:52.176 "is_configured": true, 00:08:52.176 "data_offset": 2048, 00:08:52.176 "data_size": 63488 00:08:52.176 } 00:08:52.176 ] 00:08:52.176 } 00:08:52.176 } 00:08:52.176 }' 00:08:52.176 10:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.176 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:52.176 BaseBdev2 00:08:52.176 BaseBdev3' 00:08:52.176 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.436 10:37:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.436 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.437 [2024-11-18 10:37:18.213431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.437 [2024-11-18 10:37:18.213454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.437 [2024-11-18 10:37:18.213502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.437 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.696 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.696 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.696 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.696 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.696 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.696 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.696 "name": "Existed_Raid", 00:08:52.696 "uuid": "def62d48-aa38-41e5-aef6-283d6a925280", 00:08:52.696 "strip_size_kb": 64, 00:08:52.696 "state": "offline", 00:08:52.696 "raid_level": "concat", 00:08:52.696 "superblock": true, 00:08:52.696 "num_base_bdevs": 3, 00:08:52.696 "num_base_bdevs_discovered": 2, 00:08:52.696 "num_base_bdevs_operational": 2, 00:08:52.696 "base_bdevs_list": [ 00:08:52.696 { 00:08:52.696 "name": null, 00:08:52.696 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:52.696 "is_configured": false, 00:08:52.696 "data_offset": 0, 00:08:52.696 "data_size": 63488 00:08:52.696 }, 00:08:52.696 { 00:08:52.696 "name": "BaseBdev2", 00:08:52.696 "uuid": "9f07ae0a-bdaa-42dc-b9f5-494110454a13", 00:08:52.696 "is_configured": true, 00:08:52.696 "data_offset": 2048, 00:08:52.696 "data_size": 63488 00:08:52.696 }, 00:08:52.696 { 00:08:52.696 "name": "BaseBdev3", 00:08:52.696 "uuid": "cf13100b-46ba-43d0-84ad-8fc547fb04c6", 00:08:52.696 "is_configured": true, 00:08:52.696 "data_offset": 2048, 00:08:52.696 "data_size": 63488 00:08:52.696 } 00:08:52.696 ] 00:08:52.696 }' 00:08:52.696 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.696 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.956 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.956 [2024-11-18 10:37:18.824050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.215 10:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.216 [2024-11-18 10:37:18.975162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.216 [2024-11-18 10:37:18.975235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:53.216 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.216 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:53.216 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.216 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.216 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:53.216 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.216 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.216 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 BaseBdev2 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.486 
10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 [ 00:08:53.486 { 00:08:53.486 "name": "BaseBdev2", 00:08:53.486 "aliases": [ 00:08:53.486 "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a" 00:08:53.486 ], 00:08:53.486 "product_name": "Malloc disk", 00:08:53.486 "block_size": 512, 00:08:53.486 "num_blocks": 65536, 00:08:53.486 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:53.486 "assigned_rate_limits": { 00:08:53.486 "rw_ios_per_sec": 0, 00:08:53.486 "rw_mbytes_per_sec": 0, 00:08:53.486 "r_mbytes_per_sec": 0, 00:08:53.486 "w_mbytes_per_sec": 0 
00:08:53.486 }, 00:08:53.486 "claimed": false, 00:08:53.486 "zoned": false, 00:08:53.486 "supported_io_types": { 00:08:53.486 "read": true, 00:08:53.486 "write": true, 00:08:53.486 "unmap": true, 00:08:53.486 "flush": true, 00:08:53.486 "reset": true, 00:08:53.486 "nvme_admin": false, 00:08:53.486 "nvme_io": false, 00:08:53.486 "nvme_io_md": false, 00:08:53.486 "write_zeroes": true, 00:08:53.486 "zcopy": true, 00:08:53.486 "get_zone_info": false, 00:08:53.486 "zone_management": false, 00:08:53.486 "zone_append": false, 00:08:53.486 "compare": false, 00:08:53.486 "compare_and_write": false, 00:08:53.486 "abort": true, 00:08:53.486 "seek_hole": false, 00:08:53.486 "seek_data": false, 00:08:53.486 "copy": true, 00:08:53.486 "nvme_iov_md": false 00:08:53.486 }, 00:08:53.486 "memory_domains": [ 00:08:53.486 { 00:08:53.486 "dma_device_id": "system", 00:08:53.486 "dma_device_type": 1 00:08:53.486 }, 00:08:53.486 { 00:08:53.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.486 "dma_device_type": 2 00:08:53.486 } 00:08:53.486 ], 00:08:53.486 "driver_specific": {} 00:08:53.486 } 00:08:53.486 ] 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 BaseBdev3 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.486 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 [ 00:08:53.486 { 00:08:53.486 "name": "BaseBdev3", 00:08:53.486 "aliases": [ 00:08:53.486 "f5b0aecd-df35-4710-86b4-3133338e458c" 00:08:53.486 ], 00:08:53.486 "product_name": "Malloc disk", 00:08:53.486 "block_size": 512, 00:08:53.486 "num_blocks": 65536, 00:08:53.486 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:53.486 "assigned_rate_limits": { 00:08:53.486 "rw_ios_per_sec": 0, 00:08:53.486 "rw_mbytes_per_sec": 0, 
00:08:53.486 "r_mbytes_per_sec": 0, 00:08:53.486 "w_mbytes_per_sec": 0 00:08:53.486 }, 00:08:53.486 "claimed": false, 00:08:53.486 "zoned": false, 00:08:53.486 "supported_io_types": { 00:08:53.486 "read": true, 00:08:53.486 "write": true, 00:08:53.486 "unmap": true, 00:08:53.486 "flush": true, 00:08:53.486 "reset": true, 00:08:53.486 "nvme_admin": false, 00:08:53.486 "nvme_io": false, 00:08:53.486 "nvme_io_md": false, 00:08:53.487 "write_zeroes": true, 00:08:53.487 "zcopy": true, 00:08:53.487 "get_zone_info": false, 00:08:53.487 "zone_management": false, 00:08:53.487 "zone_append": false, 00:08:53.487 "compare": false, 00:08:53.487 "compare_and_write": false, 00:08:53.487 "abort": true, 00:08:53.487 "seek_hole": false, 00:08:53.487 "seek_data": false, 00:08:53.487 "copy": true, 00:08:53.487 "nvme_iov_md": false 00:08:53.487 }, 00:08:53.487 "memory_domains": [ 00:08:53.487 { 00:08:53.487 "dma_device_id": "system", 00:08:53.487 "dma_device_type": 1 00:08:53.487 }, 00:08:53.487 { 00:08:53.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.487 "dma_device_type": 2 00:08:53.487 } 00:08:53.487 ], 00:08:53.487 "driver_specific": {} 00:08:53.487 } 00:08:53.487 ] 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.487 [2024-11-18 10:37:19.298378] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.487 [2024-11-18 10:37:19.298460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.487 [2024-11-18 10:37:19.298507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.487 [2024-11-18 10:37:19.300537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.487 10:37:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.487 "name": "Existed_Raid", 00:08:53.487 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:53.487 "strip_size_kb": 64, 00:08:53.487 "state": "configuring", 00:08:53.487 "raid_level": "concat", 00:08:53.487 "superblock": true, 00:08:53.487 "num_base_bdevs": 3, 00:08:53.487 "num_base_bdevs_discovered": 2, 00:08:53.487 "num_base_bdevs_operational": 3, 00:08:53.487 "base_bdevs_list": [ 00:08:53.487 { 00:08:53.487 "name": "BaseBdev1", 00:08:53.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.487 "is_configured": false, 00:08:53.487 "data_offset": 0, 00:08:53.487 "data_size": 0 00:08:53.487 }, 00:08:53.487 { 00:08:53.487 "name": "BaseBdev2", 00:08:53.487 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:53.487 "is_configured": true, 00:08:53.487 "data_offset": 2048, 00:08:53.487 "data_size": 63488 00:08:53.487 }, 00:08:53.487 { 00:08:53.487 "name": "BaseBdev3", 00:08:53.487 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:53.487 "is_configured": true, 00:08:53.487 "data_offset": 2048, 00:08:53.487 "data_size": 63488 00:08:53.487 } 00:08:53.487 ] 00:08:53.487 }' 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.487 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.060 [2024-11-18 10:37:19.725582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.060 "name": "Existed_Raid", 00:08:54.060 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:54.060 "strip_size_kb": 64, 00:08:54.060 "state": "configuring", 00:08:54.060 "raid_level": "concat", 00:08:54.060 "superblock": true, 00:08:54.060 "num_base_bdevs": 3, 00:08:54.060 "num_base_bdevs_discovered": 1, 00:08:54.060 "num_base_bdevs_operational": 3, 00:08:54.060 "base_bdevs_list": [ 00:08:54.060 { 00:08:54.060 "name": "BaseBdev1", 00:08:54.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.060 "is_configured": false, 00:08:54.060 "data_offset": 0, 00:08:54.060 "data_size": 0 00:08:54.060 }, 00:08:54.060 { 00:08:54.060 "name": null, 00:08:54.060 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:54.060 "is_configured": false, 00:08:54.060 "data_offset": 0, 00:08:54.060 "data_size": 63488 00:08:54.060 }, 00:08:54.060 { 00:08:54.060 "name": "BaseBdev3", 00:08:54.060 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:54.060 "is_configured": true, 00:08:54.060 "data_offset": 2048, 00:08:54.060 "data_size": 63488 00:08:54.060 } 00:08:54.060 ] 00:08:54.060 }' 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.060 10:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.319 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.319 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.319 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.319 10:37:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.319 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.579 [2024-11-18 10:37:20.258484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.579 BaseBdev1 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.579 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.580 
10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.580 [ 00:08:54.580 { 00:08:54.580 "name": "BaseBdev1", 00:08:54.580 "aliases": [ 00:08:54.580 "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5" 00:08:54.580 ], 00:08:54.580 "product_name": "Malloc disk", 00:08:54.580 "block_size": 512, 00:08:54.580 "num_blocks": 65536, 00:08:54.580 "uuid": "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:54.580 "assigned_rate_limits": { 00:08:54.580 "rw_ios_per_sec": 0, 00:08:54.580 "rw_mbytes_per_sec": 0, 00:08:54.580 "r_mbytes_per_sec": 0, 00:08:54.580 "w_mbytes_per_sec": 0 00:08:54.580 }, 00:08:54.580 "claimed": true, 00:08:54.580 "claim_type": "exclusive_write", 00:08:54.580 "zoned": false, 00:08:54.580 "supported_io_types": { 00:08:54.580 "read": true, 00:08:54.580 "write": true, 00:08:54.580 "unmap": true, 00:08:54.580 "flush": true, 00:08:54.580 "reset": true, 00:08:54.580 "nvme_admin": false, 00:08:54.580 "nvme_io": false, 00:08:54.580 "nvme_io_md": false, 00:08:54.580 "write_zeroes": true, 00:08:54.580 "zcopy": true, 00:08:54.580 "get_zone_info": false, 00:08:54.580 "zone_management": false, 00:08:54.580 "zone_append": false, 00:08:54.580 "compare": false, 00:08:54.580 "compare_and_write": false, 00:08:54.580 "abort": true, 00:08:54.580 "seek_hole": false, 00:08:54.580 "seek_data": false, 00:08:54.580 "copy": true, 00:08:54.580 "nvme_iov_md": false 00:08:54.580 }, 00:08:54.580 "memory_domains": [ 00:08:54.580 { 00:08:54.580 "dma_device_id": "system", 00:08:54.580 "dma_device_type": 1 00:08:54.580 }, 00:08:54.580 { 00:08:54.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:54.580 "dma_device_type": 2 00:08:54.580 } 00:08:54.580 ], 00:08:54.580 "driver_specific": {} 00:08:54.580 } 00:08:54.580 ] 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.580 "name": "Existed_Raid", 00:08:54.580 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:54.580 "strip_size_kb": 64, 00:08:54.580 "state": "configuring", 00:08:54.580 "raid_level": "concat", 00:08:54.580 "superblock": true, 00:08:54.580 "num_base_bdevs": 3, 00:08:54.580 "num_base_bdevs_discovered": 2, 00:08:54.580 "num_base_bdevs_operational": 3, 00:08:54.580 "base_bdevs_list": [ 00:08:54.580 { 00:08:54.580 "name": "BaseBdev1", 00:08:54.580 "uuid": "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:54.580 "is_configured": true, 00:08:54.580 "data_offset": 2048, 00:08:54.580 "data_size": 63488 00:08:54.580 }, 00:08:54.580 { 00:08:54.580 "name": null, 00:08:54.580 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:54.580 "is_configured": false, 00:08:54.580 "data_offset": 0, 00:08:54.580 "data_size": 63488 00:08:54.580 }, 00:08:54.580 { 00:08:54.580 "name": "BaseBdev3", 00:08:54.580 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:54.580 "is_configured": true, 00:08:54.580 "data_offset": 2048, 00:08:54.580 "data_size": 63488 00:08:54.580 } 00:08:54.580 ] 00:08:54.580 }' 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.580 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.840 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.840 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.840 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.840 10:37:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.100 [2024-11-18 10:37:20.745674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.100 "name": "Existed_Raid", 00:08:55.100 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:55.100 "strip_size_kb": 64, 00:08:55.100 "state": "configuring", 00:08:55.100 "raid_level": "concat", 00:08:55.100 "superblock": true, 00:08:55.100 "num_base_bdevs": 3, 00:08:55.100 "num_base_bdevs_discovered": 1, 00:08:55.100 "num_base_bdevs_operational": 3, 00:08:55.100 "base_bdevs_list": [ 00:08:55.100 { 00:08:55.100 "name": "BaseBdev1", 00:08:55.100 "uuid": "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:55.100 "is_configured": true, 00:08:55.100 "data_offset": 2048, 00:08:55.100 "data_size": 63488 00:08:55.100 }, 00:08:55.100 { 00:08:55.100 "name": null, 00:08:55.100 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:55.100 "is_configured": false, 00:08:55.100 "data_offset": 0, 00:08:55.100 "data_size": 63488 00:08:55.100 }, 00:08:55.100 { 00:08:55.100 "name": null, 00:08:55.100 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:55.100 "is_configured": false, 00:08:55.100 "data_offset": 0, 00:08:55.100 "data_size": 63488 00:08:55.100 } 00:08:55.100 ] 00:08:55.100 }' 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.100 10:37:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.360 [2024-11-18 10:37:21.216894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.360 10:37:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.360 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.361 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.621 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.621 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.621 "name": "Existed_Raid", 00:08:55.621 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:55.621 "strip_size_kb": 64, 00:08:55.621 "state": "configuring", 00:08:55.621 "raid_level": "concat", 00:08:55.621 "superblock": true, 00:08:55.621 "num_base_bdevs": 3, 00:08:55.621 "num_base_bdevs_discovered": 2, 00:08:55.621 "num_base_bdevs_operational": 3, 00:08:55.621 "base_bdevs_list": [ 00:08:55.621 { 00:08:55.621 "name": "BaseBdev1", 00:08:55.621 "uuid": "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:55.621 "is_configured": true, 00:08:55.621 "data_offset": 2048, 00:08:55.621 "data_size": 63488 00:08:55.621 }, 00:08:55.621 { 00:08:55.621 "name": null, 00:08:55.621 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:55.621 "is_configured": 
false, 00:08:55.621 "data_offset": 0, 00:08:55.621 "data_size": 63488 00:08:55.621 }, 00:08:55.621 { 00:08:55.621 "name": "BaseBdev3", 00:08:55.621 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:55.621 "is_configured": true, 00:08:55.621 "data_offset": 2048, 00:08:55.621 "data_size": 63488 00:08:55.621 } 00:08:55.621 ] 00:08:55.621 }' 00:08:55.621 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.621 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.881 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.881 [2024-11-18 10:37:21.676127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.150 10:37:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.150 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.151 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.151 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.151 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.151 "name": "Existed_Raid", 00:08:56.151 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:56.151 "strip_size_kb": 64, 00:08:56.151 "state": "configuring", 00:08:56.151 "raid_level": "concat", 00:08:56.151 "superblock": true, 00:08:56.151 "num_base_bdevs": 3, 00:08:56.151 
"num_base_bdevs_discovered": 1, 00:08:56.151 "num_base_bdevs_operational": 3, 00:08:56.151 "base_bdevs_list": [ 00:08:56.151 { 00:08:56.151 "name": null, 00:08:56.151 "uuid": "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:56.151 "is_configured": false, 00:08:56.151 "data_offset": 0, 00:08:56.151 "data_size": 63488 00:08:56.151 }, 00:08:56.151 { 00:08:56.151 "name": null, 00:08:56.151 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:56.151 "is_configured": false, 00:08:56.151 "data_offset": 0, 00:08:56.151 "data_size": 63488 00:08:56.151 }, 00:08:56.151 { 00:08:56.151 "name": "BaseBdev3", 00:08:56.151 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:56.151 "is_configured": true, 00:08:56.151 "data_offset": 2048, 00:08:56.151 "data_size": 63488 00:08:56.151 } 00:08:56.151 ] 00:08:56.151 }' 00:08:56.151 10:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.151 10:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.421 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.421 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.421 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.421 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:56.421 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.421 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:56.421 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:56.421 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.422 10:37:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.422 [2024-11-18 10:37:22.238399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.422 
10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.422 "name": "Existed_Raid", 00:08:56.422 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:56.422 "strip_size_kb": 64, 00:08:56.422 "state": "configuring", 00:08:56.422 "raid_level": "concat", 00:08:56.422 "superblock": true, 00:08:56.422 "num_base_bdevs": 3, 00:08:56.422 "num_base_bdevs_discovered": 2, 00:08:56.422 "num_base_bdevs_operational": 3, 00:08:56.422 "base_bdevs_list": [ 00:08:56.422 { 00:08:56.422 "name": null, 00:08:56.422 "uuid": "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:56.422 "is_configured": false, 00:08:56.422 "data_offset": 0, 00:08:56.422 "data_size": 63488 00:08:56.422 }, 00:08:56.422 { 00:08:56.422 "name": "BaseBdev2", 00:08:56.422 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:56.422 "is_configured": true, 00:08:56.422 "data_offset": 2048, 00:08:56.422 "data_size": 63488 00:08:56.422 }, 00:08:56.422 { 00:08:56.422 "name": "BaseBdev3", 00:08:56.422 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:56.422 "is_configured": true, 00:08:56.422 "data_offset": 2048, 00:08:56.422 "data_size": 63488 00:08:56.422 } 00:08:56.422 ] 00:08:56.422 }' 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.422 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.992 [2024-11-18 10:37:22.846402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:56.992 [2024-11-18 10:37:22.846628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:56.992 [2024-11-18 10:37:22.846647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.992 [2024-11-18 10:37:22.846907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:56.992 [2024-11-18 10:37:22.847094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:56.992 [2024-11-18 10:37:22.847105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:08:56.992 [2024-11-18 10:37:22.847264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.992 NewBaseBdev 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.992 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.992 [ 00:08:56.992 { 00:08:56.992 "name": "NewBaseBdev", 00:08:56.992 "aliases": [ 00:08:56.992 "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5" 00:08:56.992 ], 00:08:56.992 "product_name": "Malloc disk", 00:08:56.992 "block_size": 512, 
00:08:56.992 "num_blocks": 65536, 00:08:56.992 "uuid": "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:56.992 "assigned_rate_limits": { 00:08:56.992 "rw_ios_per_sec": 0, 00:08:56.992 "rw_mbytes_per_sec": 0, 00:08:56.992 "r_mbytes_per_sec": 0, 00:08:56.992 "w_mbytes_per_sec": 0 00:08:56.992 }, 00:08:56.992 "claimed": true, 00:08:56.992 "claim_type": "exclusive_write", 00:08:56.992 "zoned": false, 00:08:57.251 "supported_io_types": { 00:08:57.251 "read": true, 00:08:57.251 "write": true, 00:08:57.251 "unmap": true, 00:08:57.251 "flush": true, 00:08:57.251 "reset": true, 00:08:57.251 "nvme_admin": false, 00:08:57.251 "nvme_io": false, 00:08:57.251 "nvme_io_md": false, 00:08:57.251 "write_zeroes": true, 00:08:57.251 "zcopy": true, 00:08:57.251 "get_zone_info": false, 00:08:57.251 "zone_management": false, 00:08:57.251 "zone_append": false, 00:08:57.251 "compare": false, 00:08:57.251 "compare_and_write": false, 00:08:57.251 "abort": true, 00:08:57.251 "seek_hole": false, 00:08:57.251 "seek_data": false, 00:08:57.251 "copy": true, 00:08:57.251 "nvme_iov_md": false 00:08:57.251 }, 00:08:57.251 "memory_domains": [ 00:08:57.251 { 00:08:57.251 "dma_device_id": "system", 00:08:57.251 "dma_device_type": 1 00:08:57.251 }, 00:08:57.251 { 00:08:57.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.251 "dma_device_type": 2 00:08:57.251 } 00:08:57.251 ], 00:08:57.251 "driver_specific": {} 00:08:57.251 } 00:08:57.251 ] 00:08:57.251 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.251 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.251 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:57.251 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.251 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:57.251 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.251 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.251 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.252 "name": "Existed_Raid", 00:08:57.252 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:57.252 "strip_size_kb": 64, 00:08:57.252 "state": "online", 00:08:57.252 "raid_level": "concat", 00:08:57.252 "superblock": true, 00:08:57.252 "num_base_bdevs": 3, 00:08:57.252 "num_base_bdevs_discovered": 3, 00:08:57.252 "num_base_bdevs_operational": 3, 00:08:57.252 "base_bdevs_list": [ 00:08:57.252 { 00:08:57.252 "name": "NewBaseBdev", 00:08:57.252 "uuid": 
"d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:57.252 "is_configured": true, 00:08:57.252 "data_offset": 2048, 00:08:57.252 "data_size": 63488 00:08:57.252 }, 00:08:57.252 { 00:08:57.252 "name": "BaseBdev2", 00:08:57.252 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:57.252 "is_configured": true, 00:08:57.252 "data_offset": 2048, 00:08:57.252 "data_size": 63488 00:08:57.252 }, 00:08:57.252 { 00:08:57.252 "name": "BaseBdev3", 00:08:57.252 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:57.252 "is_configured": true, 00:08:57.252 "data_offset": 2048, 00:08:57.252 "data_size": 63488 00:08:57.252 } 00:08:57.252 ] 00:08:57.252 }' 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.252 10:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:57.512 [2024-11-18 10:37:23.321868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.512 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.512 "name": "Existed_Raid", 00:08:57.512 "aliases": [ 00:08:57.512 "8a534ac5-6608-49a3-8038-98a4a1f3bdd8" 00:08:57.512 ], 00:08:57.512 "product_name": "Raid Volume", 00:08:57.512 "block_size": 512, 00:08:57.512 "num_blocks": 190464, 00:08:57.512 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:57.512 "assigned_rate_limits": { 00:08:57.512 "rw_ios_per_sec": 0, 00:08:57.512 "rw_mbytes_per_sec": 0, 00:08:57.512 "r_mbytes_per_sec": 0, 00:08:57.512 "w_mbytes_per_sec": 0 00:08:57.512 }, 00:08:57.512 "claimed": false, 00:08:57.512 "zoned": false, 00:08:57.512 "supported_io_types": { 00:08:57.512 "read": true, 00:08:57.512 "write": true, 00:08:57.512 "unmap": true, 00:08:57.512 "flush": true, 00:08:57.512 "reset": true, 00:08:57.512 "nvme_admin": false, 00:08:57.512 "nvme_io": false, 00:08:57.512 "nvme_io_md": false, 00:08:57.512 "write_zeroes": true, 00:08:57.512 "zcopy": false, 00:08:57.512 "get_zone_info": false, 00:08:57.512 "zone_management": false, 00:08:57.512 "zone_append": false, 00:08:57.512 "compare": false, 00:08:57.512 "compare_and_write": false, 00:08:57.512 "abort": false, 00:08:57.512 "seek_hole": false, 00:08:57.512 "seek_data": false, 00:08:57.512 "copy": false, 00:08:57.512 "nvme_iov_md": false 00:08:57.512 }, 00:08:57.512 "memory_domains": [ 00:08:57.512 { 00:08:57.512 "dma_device_id": "system", 00:08:57.512 "dma_device_type": 1 00:08:57.512 }, 00:08:57.512 { 00:08:57.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.512 "dma_device_type": 2 00:08:57.512 }, 00:08:57.512 { 00:08:57.512 "dma_device_id": "system", 00:08:57.512 "dma_device_type": 1 00:08:57.512 }, 00:08:57.512 { 00:08:57.512 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.512 "dma_device_type": 2 00:08:57.512 }, 00:08:57.512 { 00:08:57.512 "dma_device_id": "system", 00:08:57.512 "dma_device_type": 1 00:08:57.512 }, 00:08:57.512 { 00:08:57.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.512 "dma_device_type": 2 00:08:57.512 } 00:08:57.512 ], 00:08:57.512 "driver_specific": { 00:08:57.512 "raid": { 00:08:57.512 "uuid": "8a534ac5-6608-49a3-8038-98a4a1f3bdd8", 00:08:57.512 "strip_size_kb": 64, 00:08:57.512 "state": "online", 00:08:57.512 "raid_level": "concat", 00:08:57.513 "superblock": true, 00:08:57.513 "num_base_bdevs": 3, 00:08:57.513 "num_base_bdevs_discovered": 3, 00:08:57.513 "num_base_bdevs_operational": 3, 00:08:57.513 "base_bdevs_list": [ 00:08:57.513 { 00:08:57.513 "name": "NewBaseBdev", 00:08:57.513 "uuid": "d5b1265a-e814-4a1a-bfbc-95ad1b78a8b5", 00:08:57.513 "is_configured": true, 00:08:57.513 "data_offset": 2048, 00:08:57.513 "data_size": 63488 00:08:57.513 }, 00:08:57.513 { 00:08:57.513 "name": "BaseBdev2", 00:08:57.513 "uuid": "e8bae5dd-31d9-45cc-9f6f-ac47081ddf8a", 00:08:57.513 "is_configured": true, 00:08:57.513 "data_offset": 2048, 00:08:57.513 "data_size": 63488 00:08:57.513 }, 00:08:57.513 { 00:08:57.513 "name": "BaseBdev3", 00:08:57.513 "uuid": "f5b0aecd-df35-4710-86b4-3133338e458c", 00:08:57.513 "is_configured": true, 00:08:57.513 "data_offset": 2048, 00:08:57.513 "data_size": 63488 00:08:57.513 } 00:08:57.513 ] 00:08:57.513 } 00:08:57.513 } 00:08:57.513 }' 00:08:57.513 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.513 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:57.513 BaseBdev2 00:08:57.513 BaseBdev3' 00:08:57.513 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.773 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.774 [2024-11-18 10:37:23.569202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.774 [2024-11-18 10:37:23.569225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.774 [2024-11-18 10:37:23.569293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.774 [2024-11-18 10:37:23.569345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.774 [2024-11-18 10:37:23.569358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66110 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66110 ']' 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66110 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66110 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66110' 00:08:57.774 killing process with pid 66110 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66110 00:08:57.774 [2024-11-18 10:37:23.609436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.774 10:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66110 00:08:58.344 [2024-11-18 10:37:23.919604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.283 10:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:59.283 00:08:59.283 real 0m10.419s 00:08:59.283 user 0m16.346s 00:08:59.283 sys 0m1.898s 00:08:59.283 10:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:59.283 ************************************ 00:08:59.283 END TEST raid_state_function_test_sb 00:08:59.283 ************************************ 00:08:59.283 10:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.283 10:37:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:59.283 10:37:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:59.283 10:37:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.283 10:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.283 ************************************ 00:08:59.283 START TEST raid_superblock_test 00:08:59.283 ************************************ 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:59.283 10:37:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66725 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66725 00:08:59.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66725 ']' 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.283 10:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.543 [2024-11-18 10:37:25.226279] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:59.544 [2024-11-18 10:37:25.226463] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66725 ] 00:08:59.544 [2024-11-18 10:37:25.405078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.804 [2024-11-18 10:37:25.540313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.064 [2024-11-18 10:37:25.764540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.064 [2024-11-18 10:37:25.764676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:00.324 
10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.324 malloc1 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.324 [2024-11-18 10:37:26.109555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.324 [2024-11-18 10:37:26.109624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.324 [2024-11-18 10:37:26.109650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:00.324 [2024-11-18 10:37:26.109660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.324 [2024-11-18 10:37:26.112040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.324 [2024-11-18 10:37:26.112122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.324 pt1 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.324 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.325 malloc2 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.325 [2024-11-18 10:37:26.167360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.325 [2024-11-18 10:37:26.167450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.325 [2024-11-18 10:37:26.167489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:00.325 [2024-11-18 10:37:26.167517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.325 [2024-11-18 10:37:26.169811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.325 [2024-11-18 10:37:26.169879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:00.325 
pt2 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.325 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.585 malloc3 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.585 [2024-11-18 10:37:26.244827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:00.585 [2024-11-18 10:37:26.244910] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.585 [2024-11-18 10:37:26.244948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:00.585 [2024-11-18 10:37:26.244976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.585 [2024-11-18 10:37:26.247281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.585 [2024-11-18 10:37:26.247350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:00.585 pt3 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.585 [2024-11-18 10:37:26.256861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:00.585 [2024-11-18 10:37:26.258877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.585 [2024-11-18 10:37:26.258994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:00.585 [2024-11-18 10:37:26.259184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:00.585 [2024-11-18 10:37:26.259216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.585 [2024-11-18 10:37:26.259455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:00.585 [2024-11-18 10:37:26.259614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:00.585 [2024-11-18 10:37:26.259623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:00.585 [2024-11-18 10:37:26.259765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.585 10:37:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.585 "name": "raid_bdev1", 00:09:00.585 "uuid": "b8d7473b-5db4-4445-808e-bd65c923a8b3", 00:09:00.585 "strip_size_kb": 64, 00:09:00.585 "state": "online", 00:09:00.585 "raid_level": "concat", 00:09:00.585 "superblock": true, 00:09:00.585 "num_base_bdevs": 3, 00:09:00.585 "num_base_bdevs_discovered": 3, 00:09:00.585 "num_base_bdevs_operational": 3, 00:09:00.585 "base_bdevs_list": [ 00:09:00.585 { 00:09:00.585 "name": "pt1", 00:09:00.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.585 "is_configured": true, 00:09:00.585 "data_offset": 2048, 00:09:00.585 "data_size": 63488 00:09:00.585 }, 00:09:00.585 { 00:09:00.585 "name": "pt2", 00:09:00.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.585 "is_configured": true, 00:09:00.585 "data_offset": 2048, 00:09:00.585 "data_size": 63488 00:09:00.585 }, 00:09:00.585 { 00:09:00.585 "name": "pt3", 00:09:00.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.585 "is_configured": true, 00:09:00.585 "data_offset": 2048, 00:09:00.585 "data_size": 63488 00:09:00.585 } 00:09:00.585 ] 00:09:00.585 }' 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.585 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.846 [2024-11-18 10:37:26.664424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.846 "name": "raid_bdev1", 00:09:00.846 "aliases": [ 00:09:00.846 "b8d7473b-5db4-4445-808e-bd65c923a8b3" 00:09:00.846 ], 00:09:00.846 "product_name": "Raid Volume", 00:09:00.846 "block_size": 512, 00:09:00.846 "num_blocks": 190464, 00:09:00.846 "uuid": "b8d7473b-5db4-4445-808e-bd65c923a8b3", 00:09:00.846 "assigned_rate_limits": { 00:09:00.846 "rw_ios_per_sec": 0, 00:09:00.846 "rw_mbytes_per_sec": 0, 00:09:00.846 "r_mbytes_per_sec": 0, 00:09:00.846 "w_mbytes_per_sec": 0 00:09:00.846 }, 00:09:00.846 "claimed": false, 00:09:00.846 "zoned": false, 00:09:00.846 "supported_io_types": { 00:09:00.846 "read": true, 00:09:00.846 "write": true, 00:09:00.846 "unmap": true, 00:09:00.846 "flush": true, 00:09:00.846 "reset": true, 00:09:00.846 "nvme_admin": false, 00:09:00.846 "nvme_io": false, 00:09:00.846 "nvme_io_md": false, 00:09:00.846 "write_zeroes": true, 00:09:00.846 "zcopy": false, 00:09:00.846 "get_zone_info": false, 00:09:00.846 "zone_management": false, 00:09:00.846 "zone_append": false, 00:09:00.846 "compare": 
false, 00:09:00.846 "compare_and_write": false, 00:09:00.846 "abort": false, 00:09:00.846 "seek_hole": false, 00:09:00.846 "seek_data": false, 00:09:00.846 "copy": false, 00:09:00.846 "nvme_iov_md": false 00:09:00.846 }, 00:09:00.846 "memory_domains": [ 00:09:00.846 { 00:09:00.846 "dma_device_id": "system", 00:09:00.846 "dma_device_type": 1 00:09:00.846 }, 00:09:00.846 { 00:09:00.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.846 "dma_device_type": 2 00:09:00.846 }, 00:09:00.846 { 00:09:00.846 "dma_device_id": "system", 00:09:00.846 "dma_device_type": 1 00:09:00.846 }, 00:09:00.846 { 00:09:00.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.846 "dma_device_type": 2 00:09:00.846 }, 00:09:00.846 { 00:09:00.846 "dma_device_id": "system", 00:09:00.846 "dma_device_type": 1 00:09:00.846 }, 00:09:00.846 { 00:09:00.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.846 "dma_device_type": 2 00:09:00.846 } 00:09:00.846 ], 00:09:00.846 "driver_specific": { 00:09:00.846 "raid": { 00:09:00.846 "uuid": "b8d7473b-5db4-4445-808e-bd65c923a8b3", 00:09:00.846 "strip_size_kb": 64, 00:09:00.846 "state": "online", 00:09:00.846 "raid_level": "concat", 00:09:00.846 "superblock": true, 00:09:00.846 "num_base_bdevs": 3, 00:09:00.846 "num_base_bdevs_discovered": 3, 00:09:00.846 "num_base_bdevs_operational": 3, 00:09:00.846 "base_bdevs_list": [ 00:09:00.846 { 00:09:00.846 "name": "pt1", 00:09:00.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.846 "is_configured": true, 00:09:00.846 "data_offset": 2048, 00:09:00.846 "data_size": 63488 00:09:00.846 }, 00:09:00.846 { 00:09:00.846 "name": "pt2", 00:09:00.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.846 "is_configured": true, 00:09:00.846 "data_offset": 2048, 00:09:00.846 "data_size": 63488 00:09:00.846 }, 00:09:00.846 { 00:09:00.846 "name": "pt3", 00:09:00.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.846 "is_configured": true, 00:09:00.846 "data_offset": 2048, 00:09:00.846 
"data_size": 63488 00:09:00.846 } 00:09:00.846 ] 00:09:00.846 } 00:09:00.846 } 00:09:00.846 }' 00:09:00.846 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:01.106 pt2 00:09:01.106 pt3' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.106 [2024-11-18 10:37:26.963845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.106 10:37:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.366 10:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b8d7473b-5db4-4445-808e-bd65c923a8b3 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b8d7473b-5db4-4445-808e-bd65c923a8b3 ']' 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.366 [2024-11-18 10:37:27.007512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.366 [2024-11-18 10:37:27.007538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.366 [2024-11-18 10:37:27.007609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.366 [2024-11-18 10:37:27.007670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.366 [2024-11-18 10:37:27.007680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:01.366 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 [2024-11-18 10:37:27.155325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:01.367 [2024-11-18 10:37:27.157434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:01.367 
[2024-11-18 10:37:27.157483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:01.367 [2024-11-18 10:37:27.157530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:01.367 [2024-11-18 10:37:27.157573] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:01.367 [2024-11-18 10:37:27.157591] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:01.367 [2024-11-18 10:37:27.157606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.367 [2024-11-18 10:37:27.157615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:01.367 request: 00:09:01.367 { 00:09:01.367 "name": "raid_bdev1", 00:09:01.367 "raid_level": "concat", 00:09:01.367 "base_bdevs": [ 00:09:01.367 "malloc1", 00:09:01.367 "malloc2", 00:09:01.367 "malloc3" 00:09:01.367 ], 00:09:01.367 "strip_size_kb": 64, 00:09:01.367 "superblock": false, 00:09:01.367 "method": "bdev_raid_create", 00:09:01.367 "req_id": 1 00:09:01.367 } 00:09:01.367 Got JSON-RPC error response 00:09:01.367 response: 00:09:01.367 { 00:09:01.367 "code": -17, 00:09:01.367 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:01.367 } 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.367 10:37:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 [2024-11-18 10:37:27.223291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:01.367 [2024-11-18 10:37:27.223373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.367 [2024-11-18 10:37:27.223415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:01.367 [2024-11-18 10:37:27.223444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.367 [2024-11-18 10:37:27.225872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.367 [2024-11-18 10:37:27.225940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:01.367 [2024-11-18 10:37:27.226034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:01.367 [2024-11-18 10:37:27.226107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:01.367 pt1 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.626 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.626 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.626 "name": "raid_bdev1", 00:09:01.626 "uuid": 
"b8d7473b-5db4-4445-808e-bd65c923a8b3", 00:09:01.626 "strip_size_kb": 64, 00:09:01.626 "state": "configuring", 00:09:01.626 "raid_level": "concat", 00:09:01.626 "superblock": true, 00:09:01.626 "num_base_bdevs": 3, 00:09:01.626 "num_base_bdevs_discovered": 1, 00:09:01.626 "num_base_bdevs_operational": 3, 00:09:01.626 "base_bdevs_list": [ 00:09:01.626 { 00:09:01.626 "name": "pt1", 00:09:01.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.626 "is_configured": true, 00:09:01.626 "data_offset": 2048, 00:09:01.626 "data_size": 63488 00:09:01.626 }, 00:09:01.626 { 00:09:01.626 "name": null, 00:09:01.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.626 "is_configured": false, 00:09:01.626 "data_offset": 2048, 00:09:01.626 "data_size": 63488 00:09:01.626 }, 00:09:01.626 { 00:09:01.626 "name": null, 00:09:01.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.626 "is_configured": false, 00:09:01.626 "data_offset": 2048, 00:09:01.626 "data_size": 63488 00:09:01.626 } 00:09:01.626 ] 00:09:01.626 }' 00:09:01.626 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.626 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.885 [2024-11-18 10:37:27.626677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:01.885 [2024-11-18 10:37:27.626760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.885 [2024-11-18 10:37:27.626797] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:01.885 [2024-11-18 10:37:27.626823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.885 [2024-11-18 10:37:27.627270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.885 [2024-11-18 10:37:27.627326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:01.885 [2024-11-18 10:37:27.627424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:01.885 [2024-11-18 10:37:27.627468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:01.885 pt2 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.885 [2024-11-18 10:37:27.634678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.885 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.885 "name": "raid_bdev1", 00:09:01.885 "uuid": "b8d7473b-5db4-4445-808e-bd65c923a8b3", 00:09:01.885 "strip_size_kb": 64, 00:09:01.885 "state": "configuring", 00:09:01.885 "raid_level": "concat", 00:09:01.885 "superblock": true, 00:09:01.885 "num_base_bdevs": 3, 00:09:01.885 "num_base_bdevs_discovered": 1, 00:09:01.885 "num_base_bdevs_operational": 3, 00:09:01.885 "base_bdevs_list": [ 00:09:01.885 { 00:09:01.885 "name": "pt1", 00:09:01.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.885 "is_configured": true, 00:09:01.885 "data_offset": 2048, 00:09:01.885 "data_size": 63488 00:09:01.885 }, 00:09:01.885 { 00:09:01.885 "name": null, 00:09:01.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.885 "is_configured": false, 00:09:01.885 "data_offset": 0, 00:09:01.885 "data_size": 63488 00:09:01.885 }, 00:09:01.885 { 00:09:01.885 "name": null, 00:09:01.885 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:01.885 "is_configured": false, 00:09:01.886 "data_offset": 2048, 00:09:01.886 "data_size": 63488 00:09:01.886 } 00:09:01.886 ] 00:09:01.886 }' 00:09:01.886 10:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.886 10:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.454 [2024-11-18 10:37:28.050012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:02.454 [2024-11-18 10:37:28.050103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.454 [2024-11-18 10:37:28.050134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:02.454 [2024-11-18 10:37:28.050163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.454 [2024-11-18 10:37:28.050580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.454 [2024-11-18 10:37:28.050645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:02.454 [2024-11-18 10:37:28.050741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:02.454 [2024-11-18 10:37:28.050792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:02.454 pt2 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.454 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.454 [2024-11-18 10:37:28.061987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:02.454 [2024-11-18 10:37:28.062042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.454 [2024-11-18 10:37:28.062055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:02.454 [2024-11-18 10:37:28.062064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.454 [2024-11-18 10:37:28.062445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.454 [2024-11-18 10:37:28.062479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:02.454 [2024-11-18 10:37:28.062535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:02.454 [2024-11-18 10:37:28.062554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:02.454 [2024-11-18 10:37:28.062658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.454 [2024-11-18 10:37:28.062670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.455 [2024-11-18 10:37:28.062914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:02.455 [2024-11-18 
10:37:28.063085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.455 [2024-11-18 10:37:28.063097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:02.455 [2024-11-18 10:37:28.063305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.455 pt3 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.455 "name": "raid_bdev1", 00:09:02.455 "uuid": "b8d7473b-5db4-4445-808e-bd65c923a8b3", 00:09:02.455 "strip_size_kb": 64, 00:09:02.455 "state": "online", 00:09:02.455 "raid_level": "concat", 00:09:02.455 "superblock": true, 00:09:02.455 "num_base_bdevs": 3, 00:09:02.455 "num_base_bdevs_discovered": 3, 00:09:02.455 "num_base_bdevs_operational": 3, 00:09:02.455 "base_bdevs_list": [ 00:09:02.455 { 00:09:02.455 "name": "pt1", 00:09:02.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.455 "is_configured": true, 00:09:02.455 "data_offset": 2048, 00:09:02.455 "data_size": 63488 00:09:02.455 }, 00:09:02.455 { 00:09:02.455 "name": "pt2", 00:09:02.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.455 "is_configured": true, 00:09:02.455 "data_offset": 2048, 00:09:02.455 "data_size": 63488 00:09:02.455 }, 00:09:02.455 { 00:09:02.455 "name": "pt3", 00:09:02.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.455 "is_configured": true, 00:09:02.455 "data_offset": 2048, 00:09:02.455 "data_size": 63488 00:09:02.455 } 00:09:02.455 ] 00:09:02.455 }' 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.455 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:02.714 10:37:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.714 [2024-11-18 10:37:28.521481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.714 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.714 "name": "raid_bdev1", 00:09:02.714 "aliases": [ 00:09:02.714 "b8d7473b-5db4-4445-808e-bd65c923a8b3" 00:09:02.714 ], 00:09:02.714 "product_name": "Raid Volume", 00:09:02.714 "block_size": 512, 00:09:02.714 "num_blocks": 190464, 00:09:02.714 "uuid": "b8d7473b-5db4-4445-808e-bd65c923a8b3", 00:09:02.714 "assigned_rate_limits": { 00:09:02.714 "rw_ios_per_sec": 0, 00:09:02.714 "rw_mbytes_per_sec": 0, 00:09:02.714 "r_mbytes_per_sec": 0, 00:09:02.714 "w_mbytes_per_sec": 0 00:09:02.714 }, 00:09:02.714 "claimed": false, 00:09:02.714 "zoned": false, 00:09:02.714 "supported_io_types": { 00:09:02.714 "read": true, 00:09:02.714 "write": true, 00:09:02.714 "unmap": true, 00:09:02.714 "flush": true, 00:09:02.714 "reset": true, 00:09:02.714 "nvme_admin": false, 00:09:02.714 "nvme_io": false, 00:09:02.714 "nvme_io_md": false, 00:09:02.714 
"write_zeroes": true, 00:09:02.714 "zcopy": false, 00:09:02.715 "get_zone_info": false, 00:09:02.715 "zone_management": false, 00:09:02.715 "zone_append": false, 00:09:02.715 "compare": false, 00:09:02.715 "compare_and_write": false, 00:09:02.715 "abort": false, 00:09:02.715 "seek_hole": false, 00:09:02.715 "seek_data": false, 00:09:02.715 "copy": false, 00:09:02.715 "nvme_iov_md": false 00:09:02.715 }, 00:09:02.715 "memory_domains": [ 00:09:02.715 { 00:09:02.715 "dma_device_id": "system", 00:09:02.715 "dma_device_type": 1 00:09:02.715 }, 00:09:02.715 { 00:09:02.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.715 "dma_device_type": 2 00:09:02.715 }, 00:09:02.715 { 00:09:02.715 "dma_device_id": "system", 00:09:02.715 "dma_device_type": 1 00:09:02.715 }, 00:09:02.715 { 00:09:02.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.715 "dma_device_type": 2 00:09:02.715 }, 00:09:02.715 { 00:09:02.715 "dma_device_id": "system", 00:09:02.715 "dma_device_type": 1 00:09:02.715 }, 00:09:02.715 { 00:09:02.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.715 "dma_device_type": 2 00:09:02.715 } 00:09:02.715 ], 00:09:02.715 "driver_specific": { 00:09:02.715 "raid": { 00:09:02.715 "uuid": "b8d7473b-5db4-4445-808e-bd65c923a8b3", 00:09:02.715 "strip_size_kb": 64, 00:09:02.715 "state": "online", 00:09:02.715 "raid_level": "concat", 00:09:02.715 "superblock": true, 00:09:02.715 "num_base_bdevs": 3, 00:09:02.715 "num_base_bdevs_discovered": 3, 00:09:02.715 "num_base_bdevs_operational": 3, 00:09:02.715 "base_bdevs_list": [ 00:09:02.715 { 00:09:02.715 "name": "pt1", 00:09:02.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.715 "is_configured": true, 00:09:02.715 "data_offset": 2048, 00:09:02.715 "data_size": 63488 00:09:02.715 }, 00:09:02.715 { 00:09:02.715 "name": "pt2", 00:09:02.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.715 "is_configured": true, 00:09:02.715 "data_offset": 2048, 00:09:02.715 "data_size": 63488 00:09:02.715 }, 00:09:02.715 
{ 00:09:02.715 "name": "pt3", 00:09:02.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.715 "is_configured": true, 00:09:02.715 "data_offset": 2048, 00:09:02.715 "data_size": 63488 00:09:02.715 } 00:09:02.715 ] 00:09:02.715 } 00:09:02.715 } 00:09:02.715 }' 00:09:02.715 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:02.974 pt2 00:09:02.974 pt3' 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:02.974 10:37:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.974 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.975 
[2024-11-18 10:37:28.792981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b8d7473b-5db4-4445-808e-bd65c923a8b3 '!=' b8d7473b-5db4-4445-808e-bd65c923a8b3 ']' 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66725 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66725 ']' 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66725 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.975 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66725 00:09:03.234 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.234 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.234 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66725' 00:09:03.234 killing process with pid 66725 00:09:03.234 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66725 00:09:03.234 [2024-11-18 10:37:28.879204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.234 [2024-11-18 10:37:28.879350] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.234 10:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66725 00:09:03.234 [2024-11-18 10:37:28.879447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.235 [2024-11-18 10:37:28.879462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:03.493 [2024-11-18 10:37:29.196215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.874 10:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:04.874 00:09:04.874 real 0m5.230s 00:09:04.874 user 0m7.305s 00:09:04.874 sys 0m1.029s 00:09:04.874 10:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.874 10:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.874 ************************************ 00:09:04.874 END TEST raid_superblock_test 00:09:04.874 ************************************ 00:09:04.874 10:37:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:04.874 10:37:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:04.874 10:37:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.874 10:37:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.874 ************************************ 00:09:04.874 START TEST raid_read_error_test 00:09:04.874 ************************************ 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:04.874 10:37:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.874 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gxSwyxiXFe 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66978 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66978 00:09:04.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66978 ']' 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.875 10:37:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.875 [2024-11-18 10:37:30.549382] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:04.875 [2024-11-18 10:37:30.549505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66978 ] 00:09:04.875 [2024-11-18 10:37:30.717460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.135 [2024-11-18 10:37:30.849917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.393 [2024-11-18 10:37:31.075369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.393 [2024-11-18 10:37:31.075427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.653 BaseBdev1_malloc 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.653 true 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.653 [2024-11-18 10:37:31.410842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:05.653 [2024-11-18 10:37:31.410950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.653 [2024-11-18 10:37:31.410981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:05.653 [2024-11-18 10:37:31.410993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.653 [2024-11-18 10:37:31.413371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.653 [2024-11-18 10:37:31.413408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:05.653 BaseBdev1 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.653 BaseBdev2_malloc 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.653 true 00:09:05.653 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.654 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:05.654 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.654 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.654 [2024-11-18 10:37:31.483976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:05.654 [2024-11-18 10:37:31.484030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.654 [2024-11-18 10:37:31.484045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:05.654 [2024-11-18 10:37:31.484056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.654 [2024-11-18 10:37:31.486421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.654 [2024-11-18 10:37:31.486456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:05.654 BaseBdev2 00:09:05.654 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.654 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.654 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:05.654 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.654 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.913 BaseBdev3_malloc 00:09:05.913 10:37:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.913 true 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.913 [2024-11-18 10:37:31.564158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:05.913 [2024-11-18 10:37:31.564216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.913 [2024-11-18 10:37:31.564244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:05.913 [2024-11-18 10:37:31.564255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.913 [2024-11-18 10:37:31.566553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.913 [2024-11-18 10:37:31.566591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:05.913 BaseBdev3 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.913 [2024-11-18 10:37:31.576227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.913 [2024-11-18 10:37:31.578248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.913 [2024-11-18 10:37:31.578327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.913 [2024-11-18 10:37:31.578524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.913 [2024-11-18 10:37:31.578540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.913 [2024-11-18 10:37:31.578776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:05.913 [2024-11-18 10:37:31.578931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.913 [2024-11-18 10:37:31.578945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:05.913 [2024-11-18 10:37:31.579103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.913 10:37:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.913 "name": "raid_bdev1", 00:09:05.913 "uuid": "bc619507-b425-41aa-bad6-80bc2cc5d896", 00:09:05.913 "strip_size_kb": 64, 00:09:05.913 "state": "online", 00:09:05.913 "raid_level": "concat", 00:09:05.913 "superblock": true, 00:09:05.913 "num_base_bdevs": 3, 00:09:05.913 "num_base_bdevs_discovered": 3, 00:09:05.913 "num_base_bdevs_operational": 3, 00:09:05.913 "base_bdevs_list": [ 00:09:05.913 { 00:09:05.913 "name": "BaseBdev1", 00:09:05.913 "uuid": "3c486fcb-4327-56c6-8341-69ca1b093c9d", 00:09:05.913 "is_configured": true, 00:09:05.913 "data_offset": 2048, 00:09:05.913 "data_size": 63488 00:09:05.913 }, 00:09:05.913 { 00:09:05.913 "name": "BaseBdev2", 00:09:05.913 "uuid": "e5f03be6-eb50-51a2-a426-e325c504c7c6", 00:09:05.913 "is_configured": true, 00:09:05.913 "data_offset": 2048, 00:09:05.913 "data_size": 63488 
00:09:05.913 }, 00:09:05.913 { 00:09:05.913 "name": "BaseBdev3", 00:09:05.913 "uuid": "feee13f6-aac1-5022-b8f0-d24ae8f46058", 00:09:05.913 "is_configured": true, 00:09:05.913 "data_offset": 2048, 00:09:05.913 "data_size": 63488 00:09:05.913 } 00:09:05.913 ] 00:09:05.913 }' 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.913 10:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.173 10:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:06.173 10:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:06.433 [2024-11-18 10:37:32.120778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.373 "name": "raid_bdev1", 00:09:07.373 "uuid": "bc619507-b425-41aa-bad6-80bc2cc5d896", 00:09:07.373 "strip_size_kb": 64, 00:09:07.373 "state": "online", 00:09:07.373 "raid_level": "concat", 00:09:07.373 "superblock": true, 00:09:07.373 "num_base_bdevs": 3, 00:09:07.373 "num_base_bdevs_discovered": 3, 00:09:07.373 "num_base_bdevs_operational": 3, 00:09:07.373 "base_bdevs_list": [ 00:09:07.373 { 00:09:07.373 "name": "BaseBdev1", 00:09:07.373 "uuid": "3c486fcb-4327-56c6-8341-69ca1b093c9d", 00:09:07.373 "is_configured": true, 00:09:07.373 "data_offset": 2048, 00:09:07.373 "data_size": 63488 
00:09:07.373 }, 00:09:07.373 { 00:09:07.373 "name": "BaseBdev2", 00:09:07.373 "uuid": "e5f03be6-eb50-51a2-a426-e325c504c7c6", 00:09:07.373 "is_configured": true, 00:09:07.373 "data_offset": 2048, 00:09:07.373 "data_size": 63488 00:09:07.373 }, 00:09:07.373 { 00:09:07.373 "name": "BaseBdev3", 00:09:07.373 "uuid": "feee13f6-aac1-5022-b8f0-d24ae8f46058", 00:09:07.373 "is_configured": true, 00:09:07.373 "data_offset": 2048, 00:09:07.373 "data_size": 63488 00:09:07.373 } 00:09:07.373 ] 00:09:07.373 }' 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.373 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.634 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.634 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.634 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.634 [2024-11-18 10:37:33.509345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.634 [2024-11-18 10:37:33.509396] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.634 [2024-11-18 10:37:33.511971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.634 [2024-11-18 10:37:33.512017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.634 [2024-11-18 10:37:33.512059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.634 [2024-11-18 10:37:33.512072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:07.634 { 00:09:07.634 "results": [ 00:09:07.634 { 00:09:07.634 "job": "raid_bdev1", 00:09:07.634 "core_mask": "0x1", 00:09:07.634 "workload": "randrw", 00:09:07.634 "percentage": 50, 
00:09:07.634 "status": "finished", 00:09:07.634 "queue_depth": 1, 00:09:07.634 "io_size": 131072, 00:09:07.634 "runtime": 1.389169, 00:09:07.634 "iops": 14608.73371058525, 00:09:07.634 "mibps": 1826.0917138231562, 00:09:07.634 "io_failed": 1, 00:09:07.634 "io_timeout": 0, 00:09:07.634 "avg_latency_us": 96.50090441102903, 00:09:07.634 "min_latency_us": 24.929257641921396, 00:09:07.634 "max_latency_us": 1337.907423580786 00:09:07.634 } 00:09:07.634 ], 00:09:07.634 "core_count": 1 00:09:07.634 } 00:09:07.634 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.634 10:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66978 00:09:07.634 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66978 ']' 00:09:07.634 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66978 00:09:07.894 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:07.894 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.894 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66978 00:09:07.894 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.894 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.894 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66978' 00:09:07.894 killing process with pid 66978 00:09:07.894 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66978 00:09:07.894 [2024-11-18 10:37:33.551350] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.894 10:37:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66978 00:09:08.153 [2024-11-18 
10:37:33.794346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gxSwyxiXFe 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:09.145 00:09:09.145 real 0m4.584s 00:09:09.145 user 0m5.294s 00:09:09.145 sys 0m0.671s 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.145 10:37:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.145 ************************************ 00:09:09.145 END TEST raid_read_error_test 00:09:09.145 ************************************ 00:09:09.407 10:37:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:09.407 10:37:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:09.407 10:37:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.407 10:37:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.407 ************************************ 00:09:09.407 START TEST raid_write_error_test 00:09:09.407 ************************************ 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:09.407 10:37:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:09.407 10:37:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4Dl0gCH2FU 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67129 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67129 00:09:09.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67129 ']' 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.407 10:37:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.407 [2024-11-18 10:37:35.208309] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:09.407 [2024-11-18 10:37:35.208496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67129 ] 00:09:09.667 [2024-11-18 10:37:35.386507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.668 [2024-11-18 10:37:35.516495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.927 [2024-11-18 10:37:35.739180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.927 [2024-11-18 10:37:35.739230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.187 BaseBdev1_malloc 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.187 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.448 true 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.448 [2024-11-18 10:37:36.084128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.448 [2024-11-18 10:37:36.084196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.448 [2024-11-18 10:37:36.084217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:10.448 [2024-11-18 10:37:36.084228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.448 [2024-11-18 10:37:36.086602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.448 [2024-11-18 10:37:36.086639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.448 BaseBdev1 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.448 BaseBdev2_malloc 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.448 true 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.448 [2024-11-18 10:37:36.157319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.448 [2024-11-18 10:37:36.157374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.448 [2024-11-18 10:37:36.157391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:10.448 [2024-11-18 10:37:36.157418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.448 [2024-11-18 10:37:36.159787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.448 [2024-11-18 10:37:36.159829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:10.448 BaseBdev2 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.448 10:37:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.448 BaseBdev3_malloc 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.448 true 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.448 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.448 [2024-11-18 10:37:36.239823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:10.448 [2024-11-18 10:37:36.239915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.448 [2024-11-18 10:37:36.239935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:10.448 [2024-11-18 10:37:36.239947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.448 [2024-11-18 10:37:36.242367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.449 [2024-11-18 10:37:36.242406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:10.449 BaseBdev3 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.449 [2024-11-18 10:37:36.251880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.449 [2024-11-18 10:37:36.253915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.449 [2024-11-18 10:37:36.253994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.449 [2024-11-18 10:37:36.254201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:10.449 [2024-11-18 10:37:36.254213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.449 [2024-11-18 10:37:36.254446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:10.449 [2024-11-18 10:37:36.254604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:10.449 [2024-11-18 10:37:36.254618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:10.449 [2024-11-18 10:37:36.254761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.449 "name": "raid_bdev1", 00:09:10.449 "uuid": "d38f7d2d-47cc-421b-8125-1b9245c67056", 00:09:10.449 "strip_size_kb": 64, 00:09:10.449 "state": "online", 00:09:10.449 "raid_level": "concat", 00:09:10.449 "superblock": true, 00:09:10.449 "num_base_bdevs": 3, 00:09:10.449 "num_base_bdevs_discovered": 3, 00:09:10.449 "num_base_bdevs_operational": 3, 00:09:10.449 "base_bdevs_list": [ 00:09:10.449 { 00:09:10.449 
"name": "BaseBdev1", 00:09:10.449 "uuid": "9d0f1c26-e310-56f8-9fdd-0ed0153c40ca", 00:09:10.449 "is_configured": true, 00:09:10.449 "data_offset": 2048, 00:09:10.449 "data_size": 63488 00:09:10.449 }, 00:09:10.449 { 00:09:10.449 "name": "BaseBdev2", 00:09:10.449 "uuid": "a035f18e-6a1d-5e34-bc65-edb09f48a819", 00:09:10.449 "is_configured": true, 00:09:10.449 "data_offset": 2048, 00:09:10.449 "data_size": 63488 00:09:10.449 }, 00:09:10.449 { 00:09:10.449 "name": "BaseBdev3", 00:09:10.449 "uuid": "3ba531dc-605d-5410-9420-3f743f8e2963", 00:09:10.449 "is_configured": true, 00:09:10.449 "data_offset": 2048, 00:09:10.449 "data_size": 63488 00:09:10.449 } 00:09:10.449 ] 00:09:10.449 }' 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.449 10:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.017 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:11.017 10:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:11.017 [2024-11-18 10:37:36.768483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.956 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.956 "name": "raid_bdev1", 00:09:11.956 "uuid": "d38f7d2d-47cc-421b-8125-1b9245c67056", 00:09:11.956 "strip_size_kb": 64, 00:09:11.956 "state": "online", 
00:09:11.956 "raid_level": "concat", 00:09:11.956 "superblock": true, 00:09:11.957 "num_base_bdevs": 3, 00:09:11.957 "num_base_bdevs_discovered": 3, 00:09:11.957 "num_base_bdevs_operational": 3, 00:09:11.957 "base_bdevs_list": [ 00:09:11.957 { 00:09:11.957 "name": "BaseBdev1", 00:09:11.957 "uuid": "9d0f1c26-e310-56f8-9fdd-0ed0153c40ca", 00:09:11.957 "is_configured": true, 00:09:11.957 "data_offset": 2048, 00:09:11.957 "data_size": 63488 00:09:11.957 }, 00:09:11.957 { 00:09:11.957 "name": "BaseBdev2", 00:09:11.957 "uuid": "a035f18e-6a1d-5e34-bc65-edb09f48a819", 00:09:11.957 "is_configured": true, 00:09:11.957 "data_offset": 2048, 00:09:11.957 "data_size": 63488 00:09:11.957 }, 00:09:11.957 { 00:09:11.957 "name": "BaseBdev3", 00:09:11.957 "uuid": "3ba531dc-605d-5410-9420-3f743f8e2963", 00:09:11.957 "is_configured": true, 00:09:11.957 "data_offset": 2048, 00:09:11.957 "data_size": 63488 00:09:11.957 } 00:09:11.957 ] 00:09:11.957 }' 00:09:11.957 10:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.957 10:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.526 [2024-11-18 10:37:38.132999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.526 [2024-11-18 10:37:38.133087] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.526 [2024-11-18 10:37:38.135755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.526 [2024-11-18 10:37:38.135850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.526 [2024-11-18 10:37:38.135916] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.526 [2024-11-18 10:37:38.135964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:12.526 { 00:09:12.526 "results": [ 00:09:12.526 { 00:09:12.526 "job": "raid_bdev1", 00:09:12.526 "core_mask": "0x1", 00:09:12.526 "workload": "randrw", 00:09:12.526 "percentage": 50, 00:09:12.526 "status": "finished", 00:09:12.526 "queue_depth": 1, 00:09:12.526 "io_size": 131072, 00:09:12.526 "runtime": 1.365215, 00:09:12.526 "iops": 14393.337313170454, 00:09:12.526 "mibps": 1799.1671641463067, 00:09:12.526 "io_failed": 1, 00:09:12.526 "io_timeout": 0, 00:09:12.526 "avg_latency_us": 97.93456514874516, 00:09:12.526 "min_latency_us": 25.041048034934498, 00:09:12.526 "max_latency_us": 1266.3615720524017 00:09:12.526 } 00:09:12.526 ], 00:09:12.526 "core_count": 1 00:09:12.526 } 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67129 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67129 ']' 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67129 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67129 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67129' 00:09:12.526 killing process with pid 67129 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67129 00:09:12.526 [2024-11-18 10:37:38.184106] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.526 10:37:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67129 00:09:12.786 [2024-11-18 10:37:38.427822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.167 10:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4Dl0gCH2FU 00:09:14.167 10:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:14.167 10:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:14.167 10:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:14.167 10:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:14.168 10:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:14.168 10:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:14.168 ************************************ 00:09:14.168 END TEST raid_write_error_test 00:09:14.168 ************************************ 00:09:14.168 10:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:14.168 00:09:14.168 real 0m4.552s 00:09:14.168 user 0m5.278s 00:09:14.168 sys 0m0.644s 00:09:14.168 10:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.168 10:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.168 10:37:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:14.168 10:37:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:14.168 10:37:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:14.168 10:37:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.168 10:37:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.168 ************************************ 00:09:14.168 START TEST raid_state_function_test 00:09:14.168 ************************************ 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67267 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67267' 00:09:14.168 Process raid pid: 67267 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67267 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67267 ']' 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.168 10:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.168 [2024-11-18 10:37:39.817209] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:14.168 [2024-11-18 10:37:39.817374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.168 [2024-11-18 10:37:39.996350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.428 [2024-11-18 10:37:40.130027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.687 [2024-11-18 10:37:40.358314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.687 [2024-11-18 10:37:40.358450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.947 [2024-11-18 10:37:40.637592] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.947 [2024-11-18 10:37:40.637649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.947 [2024-11-18 10:37:40.637660] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.947 [2024-11-18 10:37:40.637669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.947 [2024-11-18 10:37:40.637675] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.947 [2024-11-18 10:37:40.637685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.947 
10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.947 "name": "Existed_Raid", 00:09:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.947 "strip_size_kb": 0, 00:09:14.947 "state": "configuring", 00:09:14.947 "raid_level": "raid1", 00:09:14.947 "superblock": false, 00:09:14.947 "num_base_bdevs": 3, 00:09:14.947 "num_base_bdevs_discovered": 0, 00:09:14.947 "num_base_bdevs_operational": 3, 00:09:14.947 "base_bdevs_list": [ 00:09:14.947 { 00:09:14.947 "name": "BaseBdev1", 00:09:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.947 "is_configured": false, 00:09:14.947 "data_offset": 0, 00:09:14.947 "data_size": 0 00:09:14.947 }, 00:09:14.947 { 00:09:14.947 "name": "BaseBdev2", 00:09:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.947 "is_configured": false, 00:09:14.947 "data_offset": 0, 00:09:14.947 "data_size": 0 00:09:14.947 }, 00:09:14.947 { 00:09:14.947 "name": "BaseBdev3", 00:09:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.947 "is_configured": false, 00:09:14.947 "data_offset": 0, 00:09:14.947 "data_size": 0 00:09:14.947 } 00:09:14.947 ] 00:09:14.947 }' 00:09:14.947 10:37:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.947 10:37:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.207 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.207 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.207 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.207 [2024-11-18 10:37:41.080797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.207 [2024-11-18 10:37:41.080870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:15.207 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.207 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:15.207 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.208 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.468 [2024-11-18 10:37:41.092778] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.468 [2024-11-18 10:37:41.092857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.468 [2024-11-18 10:37:41.092889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.468 [2024-11-18 10:37:41.092912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.468 [2024-11-18 10:37:41.092970] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.468 [2024-11-18 10:37:41.093006] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.468 [2024-11-18 10:37:41.142605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.468 BaseBdev1 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.468 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.468 [ 00:09:15.468 { 00:09:15.468 "name": "BaseBdev1", 00:09:15.468 "aliases": [ 00:09:15.468 "388b2e81-2338-42de-a9c9-74a8e5d9e32b" 00:09:15.469 ], 00:09:15.469 "product_name": "Malloc disk", 00:09:15.469 "block_size": 512, 00:09:15.469 "num_blocks": 65536, 00:09:15.469 "uuid": "388b2e81-2338-42de-a9c9-74a8e5d9e32b", 00:09:15.469 "assigned_rate_limits": { 00:09:15.469 "rw_ios_per_sec": 0, 00:09:15.469 "rw_mbytes_per_sec": 0, 00:09:15.469 "r_mbytes_per_sec": 0, 00:09:15.469 "w_mbytes_per_sec": 0 00:09:15.469 }, 00:09:15.469 "claimed": true, 00:09:15.469 "claim_type": "exclusive_write", 00:09:15.469 "zoned": false, 00:09:15.469 "supported_io_types": { 00:09:15.469 "read": true, 00:09:15.469 "write": true, 00:09:15.469 "unmap": true, 00:09:15.469 "flush": true, 00:09:15.469 "reset": true, 00:09:15.469 "nvme_admin": false, 00:09:15.469 "nvme_io": false, 00:09:15.469 "nvme_io_md": false, 00:09:15.469 "write_zeroes": true, 00:09:15.469 "zcopy": true, 00:09:15.469 "get_zone_info": false, 00:09:15.469 "zone_management": false, 00:09:15.469 "zone_append": false, 00:09:15.469 "compare": false, 00:09:15.469 "compare_and_write": false, 00:09:15.469 "abort": true, 00:09:15.469 "seek_hole": false, 00:09:15.469 "seek_data": false, 00:09:15.469 "copy": true, 00:09:15.469 "nvme_iov_md": false 00:09:15.469 }, 00:09:15.469 "memory_domains": [ 00:09:15.469 { 00:09:15.469 "dma_device_id": "system", 00:09:15.469 "dma_device_type": 1 00:09:15.469 }, 00:09:15.469 { 00:09:15.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.469 "dma_device_type": 2 00:09:15.469 } 00:09:15.469 ], 00:09:15.469 "driver_specific": {} 00:09:15.469 } 00:09:15.469 ] 00:09:15.469 10:37:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:15.469 "name": "Existed_Raid", 00:09:15.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.469 "strip_size_kb": 0, 00:09:15.469 "state": "configuring", 00:09:15.469 "raid_level": "raid1", 00:09:15.469 "superblock": false, 00:09:15.469 "num_base_bdevs": 3, 00:09:15.469 "num_base_bdevs_discovered": 1, 00:09:15.469 "num_base_bdevs_operational": 3, 00:09:15.469 "base_bdevs_list": [ 00:09:15.469 { 00:09:15.469 "name": "BaseBdev1", 00:09:15.469 "uuid": "388b2e81-2338-42de-a9c9-74a8e5d9e32b", 00:09:15.469 "is_configured": true, 00:09:15.469 "data_offset": 0, 00:09:15.469 "data_size": 65536 00:09:15.469 }, 00:09:15.469 { 00:09:15.469 "name": "BaseBdev2", 00:09:15.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.469 "is_configured": false, 00:09:15.469 "data_offset": 0, 00:09:15.469 "data_size": 0 00:09:15.469 }, 00:09:15.469 { 00:09:15.469 "name": "BaseBdev3", 00:09:15.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.469 "is_configured": false, 00:09:15.469 "data_offset": 0, 00:09:15.469 "data_size": 0 00:09:15.469 } 00:09:15.469 ] 00:09:15.469 }' 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.469 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.729 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.729 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.729 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.729 [2024-11-18 10:37:41.605826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.729 [2024-11-18 10:37:41.605903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:15.729 10:37:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.729 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:15.729 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.729 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.989 [2024-11-18 10:37:41.617864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.989 [2024-11-18 10:37:41.619953] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.989 [2024-11-18 10:37:41.619994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.989 [2024-11-18 10:37:41.620004] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.989 [2024-11-18 10:37:41.620013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.989 "name": "Existed_Raid", 00:09:15.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.989 "strip_size_kb": 0, 00:09:15.989 "state": "configuring", 00:09:15.989 "raid_level": "raid1", 00:09:15.989 "superblock": false, 00:09:15.989 "num_base_bdevs": 3, 00:09:15.989 "num_base_bdevs_discovered": 1, 00:09:15.989 "num_base_bdevs_operational": 3, 00:09:15.989 "base_bdevs_list": [ 00:09:15.989 { 00:09:15.989 "name": "BaseBdev1", 00:09:15.989 "uuid": "388b2e81-2338-42de-a9c9-74a8e5d9e32b", 00:09:15.989 "is_configured": true, 00:09:15.989 "data_offset": 0, 00:09:15.989 "data_size": 65536 00:09:15.989 }, 00:09:15.989 { 00:09:15.989 "name": "BaseBdev2", 00:09:15.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.989 
"is_configured": false, 00:09:15.989 "data_offset": 0, 00:09:15.989 "data_size": 0 00:09:15.989 }, 00:09:15.989 { 00:09:15.989 "name": "BaseBdev3", 00:09:15.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.989 "is_configured": false, 00:09:15.989 "data_offset": 0, 00:09:15.989 "data_size": 0 00:09:15.989 } 00:09:15.989 ] 00:09:15.989 }' 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.989 10:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.249 [2024-11-18 10:37:42.095808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.249 BaseBdev2 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.249 10:37:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.249 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.249 [ 00:09:16.249 { 00:09:16.249 "name": "BaseBdev2", 00:09:16.249 "aliases": [ 00:09:16.249 "fed2c7d7-dfd8-4144-9e6b-fe3ea86c754f" 00:09:16.249 ], 00:09:16.249 "product_name": "Malloc disk", 00:09:16.249 "block_size": 512, 00:09:16.249 "num_blocks": 65536, 00:09:16.249 "uuid": "fed2c7d7-dfd8-4144-9e6b-fe3ea86c754f", 00:09:16.249 "assigned_rate_limits": { 00:09:16.249 "rw_ios_per_sec": 0, 00:09:16.249 "rw_mbytes_per_sec": 0, 00:09:16.249 "r_mbytes_per_sec": 0, 00:09:16.249 "w_mbytes_per_sec": 0 00:09:16.249 }, 00:09:16.249 "claimed": true, 00:09:16.249 "claim_type": "exclusive_write", 00:09:16.250 "zoned": false, 00:09:16.250 "supported_io_types": { 00:09:16.250 "read": true, 00:09:16.250 "write": true, 00:09:16.250 "unmap": true, 00:09:16.250 "flush": true, 00:09:16.250 "reset": true, 00:09:16.250 "nvme_admin": false, 00:09:16.250 "nvme_io": false, 00:09:16.250 "nvme_io_md": false, 00:09:16.250 "write_zeroes": true, 00:09:16.250 "zcopy": true, 00:09:16.250 "get_zone_info": false, 00:09:16.250 "zone_management": false, 00:09:16.250 "zone_append": false, 00:09:16.250 "compare": false, 00:09:16.250 "compare_and_write": false, 00:09:16.250 "abort": true, 00:09:16.250 "seek_hole": false, 00:09:16.250 "seek_data": false, 00:09:16.250 "copy": true, 00:09:16.250 "nvme_iov_md": false 00:09:16.250 }, 00:09:16.250 
"memory_domains": [ 00:09:16.250 { 00:09:16.250 "dma_device_id": "system", 00:09:16.250 "dma_device_type": 1 00:09:16.250 }, 00:09:16.250 { 00:09:16.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.250 "dma_device_type": 2 00:09:16.250 } 00:09:16.250 ], 00:09:16.250 "driver_specific": {} 00:09:16.250 } 00:09:16.250 ] 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.510 "name": "Existed_Raid", 00:09:16.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.510 "strip_size_kb": 0, 00:09:16.510 "state": "configuring", 00:09:16.510 "raid_level": "raid1", 00:09:16.510 "superblock": false, 00:09:16.510 "num_base_bdevs": 3, 00:09:16.510 "num_base_bdevs_discovered": 2, 00:09:16.510 "num_base_bdevs_operational": 3, 00:09:16.510 "base_bdevs_list": [ 00:09:16.510 { 00:09:16.510 "name": "BaseBdev1", 00:09:16.510 "uuid": "388b2e81-2338-42de-a9c9-74a8e5d9e32b", 00:09:16.510 "is_configured": true, 00:09:16.510 "data_offset": 0, 00:09:16.510 "data_size": 65536 00:09:16.510 }, 00:09:16.510 { 00:09:16.510 "name": "BaseBdev2", 00:09:16.510 "uuid": "fed2c7d7-dfd8-4144-9e6b-fe3ea86c754f", 00:09:16.510 "is_configured": true, 00:09:16.510 "data_offset": 0, 00:09:16.510 "data_size": 65536 00:09:16.510 }, 00:09:16.510 { 00:09:16.510 "name": "BaseBdev3", 00:09:16.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.510 "is_configured": false, 00:09:16.510 "data_offset": 0, 00:09:16.510 "data_size": 0 00:09:16.510 } 00:09:16.510 ] 00:09:16.510 }' 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.510 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.770 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:16.770 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.770 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.031 [2024-11-18 10:37:42.685430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.031 [2024-11-18 10:37:42.685480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.031 [2024-11-18 10:37:42.685496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:17.031 [2024-11-18 10:37:42.685791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:17.031 [2024-11-18 10:37:42.685980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.031 [2024-11-18 10:37:42.685989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:17.031 [2024-11-18 10:37:42.686270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.031 BaseBdev3 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.031 [ 00:09:17.031 { 00:09:17.031 "name": "BaseBdev3", 00:09:17.031 "aliases": [ 00:09:17.031 "6ef6a329-f930-4133-b981-25924b9bc78d" 00:09:17.031 ], 00:09:17.031 "product_name": "Malloc disk", 00:09:17.031 "block_size": 512, 00:09:17.031 "num_blocks": 65536, 00:09:17.031 "uuid": "6ef6a329-f930-4133-b981-25924b9bc78d", 00:09:17.031 "assigned_rate_limits": { 00:09:17.031 "rw_ios_per_sec": 0, 00:09:17.031 "rw_mbytes_per_sec": 0, 00:09:17.031 "r_mbytes_per_sec": 0, 00:09:17.031 "w_mbytes_per_sec": 0 00:09:17.031 }, 00:09:17.031 "claimed": true, 00:09:17.031 "claim_type": "exclusive_write", 00:09:17.031 "zoned": false, 00:09:17.031 "supported_io_types": { 00:09:17.031 "read": true, 00:09:17.031 "write": true, 00:09:17.031 "unmap": true, 00:09:17.031 "flush": true, 00:09:17.031 "reset": true, 00:09:17.031 "nvme_admin": false, 00:09:17.031 "nvme_io": false, 00:09:17.031 "nvme_io_md": false, 00:09:17.031 "write_zeroes": true, 00:09:17.031 "zcopy": true, 00:09:17.031 "get_zone_info": false, 00:09:17.031 "zone_management": false, 00:09:17.031 "zone_append": false, 00:09:17.031 "compare": false, 00:09:17.031 "compare_and_write": false, 00:09:17.031 "abort": true, 00:09:17.031 "seek_hole": false, 00:09:17.031 "seek_data": false, 00:09:17.031 
"copy": true, 00:09:17.031 "nvme_iov_md": false 00:09:17.031 }, 00:09:17.031 "memory_domains": [ 00:09:17.031 { 00:09:17.031 "dma_device_id": "system", 00:09:17.031 "dma_device_type": 1 00:09:17.031 }, 00:09:17.031 { 00:09:17.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.031 "dma_device_type": 2 00:09:17.031 } 00:09:17.031 ], 00:09:17.031 "driver_specific": {} 00:09:17.031 } 00:09:17.031 ] 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.031 10:37:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.031 "name": "Existed_Raid", 00:09:17.031 "uuid": "27a3e561-b65a-44df-ad2c-e77eb6445ea4", 00:09:17.031 "strip_size_kb": 0, 00:09:17.031 "state": "online", 00:09:17.031 "raid_level": "raid1", 00:09:17.031 "superblock": false, 00:09:17.031 "num_base_bdevs": 3, 00:09:17.031 "num_base_bdevs_discovered": 3, 00:09:17.031 "num_base_bdevs_operational": 3, 00:09:17.031 "base_bdevs_list": [ 00:09:17.031 { 00:09:17.031 "name": "BaseBdev1", 00:09:17.031 "uuid": "388b2e81-2338-42de-a9c9-74a8e5d9e32b", 00:09:17.031 "is_configured": true, 00:09:17.031 "data_offset": 0, 00:09:17.031 "data_size": 65536 00:09:17.031 }, 00:09:17.031 { 00:09:17.031 "name": "BaseBdev2", 00:09:17.031 "uuid": "fed2c7d7-dfd8-4144-9e6b-fe3ea86c754f", 00:09:17.031 "is_configured": true, 00:09:17.031 "data_offset": 0, 00:09:17.031 "data_size": 65536 00:09:17.031 }, 00:09:17.031 { 00:09:17.031 "name": "BaseBdev3", 00:09:17.031 "uuid": "6ef6a329-f930-4133-b981-25924b9bc78d", 00:09:17.031 "is_configured": true, 00:09:17.031 "data_offset": 0, 00:09:17.031 "data_size": 65536 00:09:17.031 } 00:09:17.031 ] 00:09:17.031 }' 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.031 10:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.292 10:37:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.292 [2024-11-18 10:37:43.109109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.292 "name": "Existed_Raid", 00:09:17.292 "aliases": [ 00:09:17.292 "27a3e561-b65a-44df-ad2c-e77eb6445ea4" 00:09:17.292 ], 00:09:17.292 "product_name": "Raid Volume", 00:09:17.292 "block_size": 512, 00:09:17.292 "num_blocks": 65536, 00:09:17.292 "uuid": "27a3e561-b65a-44df-ad2c-e77eb6445ea4", 00:09:17.292 "assigned_rate_limits": { 00:09:17.292 "rw_ios_per_sec": 0, 00:09:17.292 "rw_mbytes_per_sec": 0, 00:09:17.292 "r_mbytes_per_sec": 0, 00:09:17.292 "w_mbytes_per_sec": 0 00:09:17.292 }, 00:09:17.292 "claimed": false, 00:09:17.292 "zoned": false, 
00:09:17.292 "supported_io_types": { 00:09:17.292 "read": true, 00:09:17.292 "write": true, 00:09:17.292 "unmap": false, 00:09:17.292 "flush": false, 00:09:17.292 "reset": true, 00:09:17.292 "nvme_admin": false, 00:09:17.292 "nvme_io": false, 00:09:17.292 "nvme_io_md": false, 00:09:17.292 "write_zeroes": true, 00:09:17.292 "zcopy": false, 00:09:17.292 "get_zone_info": false, 00:09:17.292 "zone_management": false, 00:09:17.292 "zone_append": false, 00:09:17.292 "compare": false, 00:09:17.292 "compare_and_write": false, 00:09:17.292 "abort": false, 00:09:17.292 "seek_hole": false, 00:09:17.292 "seek_data": false, 00:09:17.292 "copy": false, 00:09:17.292 "nvme_iov_md": false 00:09:17.292 }, 00:09:17.292 "memory_domains": [ 00:09:17.292 { 00:09:17.292 "dma_device_id": "system", 00:09:17.292 "dma_device_type": 1 00:09:17.292 }, 00:09:17.292 { 00:09:17.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.292 "dma_device_type": 2 00:09:17.292 }, 00:09:17.292 { 00:09:17.292 "dma_device_id": "system", 00:09:17.292 "dma_device_type": 1 00:09:17.292 }, 00:09:17.292 { 00:09:17.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.292 "dma_device_type": 2 00:09:17.292 }, 00:09:17.292 { 00:09:17.292 "dma_device_id": "system", 00:09:17.292 "dma_device_type": 1 00:09:17.292 }, 00:09:17.292 { 00:09:17.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.292 "dma_device_type": 2 00:09:17.292 } 00:09:17.292 ], 00:09:17.292 "driver_specific": { 00:09:17.292 "raid": { 00:09:17.292 "uuid": "27a3e561-b65a-44df-ad2c-e77eb6445ea4", 00:09:17.292 "strip_size_kb": 0, 00:09:17.292 "state": "online", 00:09:17.292 "raid_level": "raid1", 00:09:17.292 "superblock": false, 00:09:17.292 "num_base_bdevs": 3, 00:09:17.292 "num_base_bdevs_discovered": 3, 00:09:17.292 "num_base_bdevs_operational": 3, 00:09:17.292 "base_bdevs_list": [ 00:09:17.292 { 00:09:17.292 "name": "BaseBdev1", 00:09:17.292 "uuid": "388b2e81-2338-42de-a9c9-74a8e5d9e32b", 00:09:17.292 "is_configured": true, 00:09:17.292 
"data_offset": 0, 00:09:17.292 "data_size": 65536 00:09:17.292 }, 00:09:17.292 { 00:09:17.292 "name": "BaseBdev2", 00:09:17.292 "uuid": "fed2c7d7-dfd8-4144-9e6b-fe3ea86c754f", 00:09:17.292 "is_configured": true, 00:09:17.292 "data_offset": 0, 00:09:17.292 "data_size": 65536 00:09:17.292 }, 00:09:17.292 { 00:09:17.292 "name": "BaseBdev3", 00:09:17.292 "uuid": "6ef6a329-f930-4133-b981-25924b9bc78d", 00:09:17.292 "is_configured": true, 00:09:17.292 "data_offset": 0, 00:09:17.292 "data_size": 65536 00:09:17.292 } 00:09:17.292 ] 00:09:17.292 } 00:09:17.292 } 00:09:17.292 }' 00:09:17.292 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:17.556 BaseBdev2 00:09:17.556 BaseBdev3' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.556 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.556 [2024-11-18 10:37:43.380348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.821 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.821 "name": "Existed_Raid", 00:09:17.821 "uuid": "27a3e561-b65a-44df-ad2c-e77eb6445ea4", 00:09:17.821 "strip_size_kb": 0, 00:09:17.821 "state": "online", 00:09:17.821 "raid_level": "raid1", 00:09:17.821 "superblock": false, 00:09:17.821 "num_base_bdevs": 3, 00:09:17.821 "num_base_bdevs_discovered": 2, 00:09:17.821 "num_base_bdevs_operational": 2, 00:09:17.821 "base_bdevs_list": [ 00:09:17.821 { 00:09:17.821 "name": null, 00:09:17.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.821 "is_configured": false, 00:09:17.821 "data_offset": 0, 00:09:17.821 "data_size": 65536 00:09:17.821 }, 00:09:17.821 { 00:09:17.821 "name": "BaseBdev2", 00:09:17.821 "uuid": "fed2c7d7-dfd8-4144-9e6b-fe3ea86c754f", 00:09:17.821 "is_configured": true, 00:09:17.821 "data_offset": 0, 00:09:17.821 "data_size": 65536 00:09:17.821 }, 00:09:17.821 { 00:09:17.822 "name": "BaseBdev3", 00:09:17.822 "uuid": "6ef6a329-f930-4133-b981-25924b9bc78d", 00:09:17.822 "is_configured": true, 00:09:17.822 "data_offset": 0, 00:09:17.822 "data_size": 65536 00:09:17.822 } 00:09:17.822 ] 
00:09:17.822 }' 00:09:17.822 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.822 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.088 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.088 [2024-11-18 10:37:43.952423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.349 10:37:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.349 [2024-11-18 10:37:44.108682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.349 [2024-11-18 10:37:44.108793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.349 [2024-11-18 10:37:44.207165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.349 [2024-11-18 10:37:44.207300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.349 [2024-11-18 10:37:44.207345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.349 10:37:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.349 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.610 BaseBdev2 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.610 
10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.610 [ 00:09:18.610 { 00:09:18.610 "name": "BaseBdev2", 00:09:18.610 "aliases": [ 00:09:18.610 "060a7f57-82d0-492f-bc53-ac9832cda77f" 00:09:18.610 ], 00:09:18.610 "product_name": "Malloc disk", 00:09:18.610 "block_size": 512, 00:09:18.610 "num_blocks": 65536, 00:09:18.610 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:18.610 "assigned_rate_limits": { 00:09:18.610 "rw_ios_per_sec": 0, 00:09:18.610 "rw_mbytes_per_sec": 0, 00:09:18.610 "r_mbytes_per_sec": 0, 00:09:18.610 "w_mbytes_per_sec": 0 00:09:18.610 }, 00:09:18.610 "claimed": false, 00:09:18.610 "zoned": false, 00:09:18.610 "supported_io_types": { 00:09:18.610 "read": true, 00:09:18.610 "write": true, 00:09:18.610 "unmap": true, 00:09:18.610 "flush": true, 00:09:18.610 "reset": true, 00:09:18.610 "nvme_admin": false, 00:09:18.610 "nvme_io": false, 00:09:18.610 "nvme_io_md": false, 00:09:18.610 "write_zeroes": true, 
00:09:18.610 "zcopy": true, 00:09:18.610 "get_zone_info": false, 00:09:18.610 "zone_management": false, 00:09:18.610 "zone_append": false, 00:09:18.610 "compare": false, 00:09:18.610 "compare_and_write": false, 00:09:18.610 "abort": true, 00:09:18.610 "seek_hole": false, 00:09:18.610 "seek_data": false, 00:09:18.610 "copy": true, 00:09:18.610 "nvme_iov_md": false 00:09:18.610 }, 00:09:18.610 "memory_domains": [ 00:09:18.610 { 00:09:18.610 "dma_device_id": "system", 00:09:18.610 "dma_device_type": 1 00:09:18.610 }, 00:09:18.610 { 00:09:18.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.610 "dma_device_type": 2 00:09:18.610 } 00:09:18.610 ], 00:09:18.610 "driver_specific": {} 00:09:18.610 } 00:09:18.610 ] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.610 BaseBdev3 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.610 10:37:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.610 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.610 [ 00:09:18.610 { 00:09:18.610 "name": "BaseBdev3", 00:09:18.610 "aliases": [ 00:09:18.610 "4bdd1ee9-5534-4bdb-8c8f-22358b627da4" 00:09:18.610 ], 00:09:18.610 "product_name": "Malloc disk", 00:09:18.610 "block_size": 512, 00:09:18.610 "num_blocks": 65536, 00:09:18.610 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:18.610 "assigned_rate_limits": { 00:09:18.610 "rw_ios_per_sec": 0, 00:09:18.610 "rw_mbytes_per_sec": 0, 00:09:18.610 "r_mbytes_per_sec": 0, 00:09:18.610 "w_mbytes_per_sec": 0 00:09:18.610 }, 00:09:18.610 "claimed": false, 00:09:18.610 "zoned": false, 00:09:18.610 "supported_io_types": { 00:09:18.610 "read": true, 00:09:18.610 "write": true, 00:09:18.610 "unmap": true, 00:09:18.610 "flush": true, 00:09:18.610 "reset": true, 00:09:18.610 "nvme_admin": false, 00:09:18.610 "nvme_io": false, 00:09:18.610 "nvme_io_md": false, 00:09:18.610 "write_zeroes": true, 
00:09:18.610 "zcopy": true, 00:09:18.610 "get_zone_info": false, 00:09:18.610 "zone_management": false, 00:09:18.610 "zone_append": false, 00:09:18.610 "compare": false, 00:09:18.610 "compare_and_write": false, 00:09:18.610 "abort": true, 00:09:18.610 "seek_hole": false, 00:09:18.610 "seek_data": false, 00:09:18.610 "copy": true, 00:09:18.610 "nvme_iov_md": false 00:09:18.610 }, 00:09:18.610 "memory_domains": [ 00:09:18.610 { 00:09:18.610 "dma_device_id": "system", 00:09:18.610 "dma_device_type": 1 00:09:18.610 }, 00:09:18.610 { 00:09:18.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.610 "dma_device_type": 2 00:09:18.610 } 00:09:18.610 ], 00:09:18.610 "driver_specific": {} 00:09:18.610 } 00:09:18.611 ] 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.611 [2024-11-18 10:37:44.429474] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.611 [2024-11-18 10:37:44.429559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.611 [2024-11-18 10:37:44.429614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.611 [2024-11-18 10:37:44.431686] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:18.611 "name": "Existed_Raid", 00:09:18.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.611 "strip_size_kb": 0, 00:09:18.611 "state": "configuring", 00:09:18.611 "raid_level": "raid1", 00:09:18.611 "superblock": false, 00:09:18.611 "num_base_bdevs": 3, 00:09:18.611 "num_base_bdevs_discovered": 2, 00:09:18.611 "num_base_bdevs_operational": 3, 00:09:18.611 "base_bdevs_list": [ 00:09:18.611 { 00:09:18.611 "name": "BaseBdev1", 00:09:18.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.611 "is_configured": false, 00:09:18.611 "data_offset": 0, 00:09:18.611 "data_size": 0 00:09:18.611 }, 00:09:18.611 { 00:09:18.611 "name": "BaseBdev2", 00:09:18.611 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:18.611 "is_configured": true, 00:09:18.611 "data_offset": 0, 00:09:18.611 "data_size": 65536 00:09:18.611 }, 00:09:18.611 { 00:09:18.611 "name": "BaseBdev3", 00:09:18.611 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:18.611 "is_configured": true, 00:09:18.611 "data_offset": 0, 00:09:18.611 "data_size": 65536 00:09:18.611 } 00:09:18.611 ] 00:09:18.611 }' 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.611 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.181 [2024-11-18 10:37:44.888662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.181 "name": "Existed_Raid", 00:09:19.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.181 "strip_size_kb": 0, 00:09:19.181 "state": "configuring", 00:09:19.181 "raid_level": "raid1", 00:09:19.181 "superblock": false, 00:09:19.181 "num_base_bdevs": 3, 
00:09:19.181 "num_base_bdevs_discovered": 1, 00:09:19.181 "num_base_bdevs_operational": 3, 00:09:19.181 "base_bdevs_list": [ 00:09:19.181 { 00:09:19.181 "name": "BaseBdev1", 00:09:19.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.181 "is_configured": false, 00:09:19.181 "data_offset": 0, 00:09:19.181 "data_size": 0 00:09:19.181 }, 00:09:19.181 { 00:09:19.181 "name": null, 00:09:19.181 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:19.181 "is_configured": false, 00:09:19.181 "data_offset": 0, 00:09:19.181 "data_size": 65536 00:09:19.181 }, 00:09:19.181 { 00:09:19.181 "name": "BaseBdev3", 00:09:19.181 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:19.181 "is_configured": true, 00:09:19.181 "data_offset": 0, 00:09:19.181 "data_size": 65536 00:09:19.181 } 00:09:19.181 ] 00:09:19.181 }' 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.181 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.751 10:37:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.751 [2024-11-18 10:37:45.405274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.751 BaseBdev1 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.751 [ 00:09:19.751 { 00:09:19.751 "name": "BaseBdev1", 00:09:19.751 "aliases": [ 00:09:19.751 "6d8487d2-cb5b-4167-bc53-1cc741da4d5e" 00:09:19.751 ], 00:09:19.751 "product_name": "Malloc disk", 
00:09:19.751 "block_size": 512, 00:09:19.751 "num_blocks": 65536, 00:09:19.751 "uuid": "6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:19.751 "assigned_rate_limits": { 00:09:19.751 "rw_ios_per_sec": 0, 00:09:19.751 "rw_mbytes_per_sec": 0, 00:09:19.751 "r_mbytes_per_sec": 0, 00:09:19.751 "w_mbytes_per_sec": 0 00:09:19.751 }, 00:09:19.751 "claimed": true, 00:09:19.751 "claim_type": "exclusive_write", 00:09:19.751 "zoned": false, 00:09:19.751 "supported_io_types": { 00:09:19.751 "read": true, 00:09:19.751 "write": true, 00:09:19.751 "unmap": true, 00:09:19.751 "flush": true, 00:09:19.751 "reset": true, 00:09:19.751 "nvme_admin": false, 00:09:19.751 "nvme_io": false, 00:09:19.751 "nvme_io_md": false, 00:09:19.751 "write_zeroes": true, 00:09:19.751 "zcopy": true, 00:09:19.751 "get_zone_info": false, 00:09:19.751 "zone_management": false, 00:09:19.751 "zone_append": false, 00:09:19.751 "compare": false, 00:09:19.751 "compare_and_write": false, 00:09:19.751 "abort": true, 00:09:19.751 "seek_hole": false, 00:09:19.751 "seek_data": false, 00:09:19.751 "copy": true, 00:09:19.751 "nvme_iov_md": false 00:09:19.751 }, 00:09:19.751 "memory_domains": [ 00:09:19.751 { 00:09:19.751 "dma_device_id": "system", 00:09:19.751 "dma_device_type": 1 00:09:19.751 }, 00:09:19.751 { 00:09:19.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.751 "dma_device_type": 2 00:09:19.751 } 00:09:19.751 ], 00:09:19.751 "driver_specific": {} 00:09:19.751 } 00:09:19.751 ] 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.751 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.751 "name": "Existed_Raid", 00:09:19.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.751 "strip_size_kb": 0, 00:09:19.751 "state": "configuring", 00:09:19.751 "raid_level": "raid1", 00:09:19.751 "superblock": false, 00:09:19.751 "num_base_bdevs": 3, 00:09:19.751 "num_base_bdevs_discovered": 2, 00:09:19.751 "num_base_bdevs_operational": 3, 00:09:19.751 "base_bdevs_list": [ 00:09:19.751 { 00:09:19.751 "name": "BaseBdev1", 00:09:19.752 "uuid": 
"6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:19.752 "is_configured": true, 00:09:19.752 "data_offset": 0, 00:09:19.752 "data_size": 65536 00:09:19.752 }, 00:09:19.752 { 00:09:19.752 "name": null, 00:09:19.752 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:19.752 "is_configured": false, 00:09:19.752 "data_offset": 0, 00:09:19.752 "data_size": 65536 00:09:19.752 }, 00:09:19.752 { 00:09:19.752 "name": "BaseBdev3", 00:09:19.752 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:19.752 "is_configured": true, 00:09:19.752 "data_offset": 0, 00:09:19.752 "data_size": 65536 00:09:19.752 } 00:09:19.752 ] 00:09:19.752 }' 00:09:19.752 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.752 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.321 [2024-11-18 10:37:45.928358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.321 10:37:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.321 "name": "Existed_Raid", 00:09:20.321 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:20.321 "strip_size_kb": 0, 00:09:20.321 "state": "configuring", 00:09:20.321 "raid_level": "raid1", 00:09:20.321 "superblock": false, 00:09:20.321 "num_base_bdevs": 3, 00:09:20.321 "num_base_bdevs_discovered": 1, 00:09:20.321 "num_base_bdevs_operational": 3, 00:09:20.321 "base_bdevs_list": [ 00:09:20.321 { 00:09:20.321 "name": "BaseBdev1", 00:09:20.321 "uuid": "6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:20.321 "is_configured": true, 00:09:20.321 "data_offset": 0, 00:09:20.321 "data_size": 65536 00:09:20.321 }, 00:09:20.321 { 00:09:20.321 "name": null, 00:09:20.321 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:20.321 "is_configured": false, 00:09:20.321 "data_offset": 0, 00:09:20.321 "data_size": 65536 00:09:20.321 }, 00:09:20.321 { 00:09:20.321 "name": null, 00:09:20.321 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:20.321 "is_configured": false, 00:09:20.321 "data_offset": 0, 00:09:20.321 "data_size": 65536 00:09:20.321 } 00:09:20.321 ] 00:09:20.321 }' 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.321 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.581 [2024-11-18 10:37:46.423546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.581 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.840 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.840 "name": "Existed_Raid", 00:09:20.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.840 "strip_size_kb": 0, 00:09:20.840 "state": "configuring", 00:09:20.840 "raid_level": "raid1", 00:09:20.840 "superblock": false, 00:09:20.840 "num_base_bdevs": 3, 00:09:20.840 "num_base_bdevs_discovered": 2, 00:09:20.840 "num_base_bdevs_operational": 3, 00:09:20.840 "base_bdevs_list": [ 00:09:20.840 { 00:09:20.840 "name": "BaseBdev1", 00:09:20.840 "uuid": "6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:20.840 "is_configured": true, 00:09:20.840 "data_offset": 0, 00:09:20.840 "data_size": 65536 00:09:20.840 }, 00:09:20.840 { 00:09:20.840 "name": null, 00:09:20.840 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:20.840 "is_configured": false, 00:09:20.840 "data_offset": 0, 00:09:20.840 "data_size": 65536 00:09:20.840 }, 00:09:20.840 { 00:09:20.840 "name": "BaseBdev3", 00:09:20.840 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:20.840 "is_configured": true, 00:09:20.840 "data_offset": 0, 00:09:20.840 "data_size": 65536 00:09:20.840 } 00:09:20.840 ] 00:09:20.840 }' 00:09:20.840 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.840 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.099 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.099 [2024-11-18 10:37:46.910733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.358 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.358 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.359 10:37:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.359 "name": "Existed_Raid", 00:09:21.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.359 "strip_size_kb": 0, 00:09:21.359 "state": "configuring", 00:09:21.359 "raid_level": "raid1", 00:09:21.359 "superblock": false, 00:09:21.359 "num_base_bdevs": 3, 00:09:21.359 "num_base_bdevs_discovered": 1, 00:09:21.359 "num_base_bdevs_operational": 3, 00:09:21.359 "base_bdevs_list": [ 00:09:21.359 { 00:09:21.359 "name": null, 00:09:21.359 "uuid": "6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:21.359 "is_configured": false, 00:09:21.359 "data_offset": 0, 00:09:21.359 "data_size": 65536 00:09:21.359 }, 00:09:21.359 { 00:09:21.359 "name": null, 00:09:21.359 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:21.359 "is_configured": false, 00:09:21.359 "data_offset": 0, 00:09:21.359 "data_size": 65536 00:09:21.359 }, 00:09:21.359 { 00:09:21.359 "name": "BaseBdev3", 00:09:21.359 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:21.359 "is_configured": true, 00:09:21.359 "data_offset": 0, 00:09:21.359 "data_size": 65536 00:09:21.359 } 00:09:21.359 ] 00:09:21.359 }' 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.359 10:37:47 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.618 [2024-11-18 10:37:47.465297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.618 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.877 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.877 "name": "Existed_Raid", 00:09:21.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.877 "strip_size_kb": 0, 00:09:21.877 "state": "configuring", 00:09:21.877 "raid_level": "raid1", 00:09:21.877 "superblock": false, 00:09:21.877 "num_base_bdevs": 3, 00:09:21.877 "num_base_bdevs_discovered": 2, 00:09:21.877 "num_base_bdevs_operational": 3, 00:09:21.877 "base_bdevs_list": [ 00:09:21.877 { 00:09:21.877 "name": null, 00:09:21.877 "uuid": "6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:21.877 "is_configured": false, 00:09:21.877 "data_offset": 0, 00:09:21.877 "data_size": 65536 00:09:21.877 }, 00:09:21.877 { 00:09:21.877 "name": "BaseBdev2", 00:09:21.877 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:21.877 "is_configured": true, 00:09:21.877 "data_offset": 0, 00:09:21.877 "data_size": 65536 00:09:21.877 }, 00:09:21.877 { 
00:09:21.877 "name": "BaseBdev3", 00:09:21.877 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:21.877 "is_configured": true, 00:09:21.877 "data_offset": 0, 00:09:21.877 "data_size": 65536 00:09:21.877 } 00:09:21.877 ] 00:09:21.877 }' 00:09:21.877 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.877 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:22.136 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.136 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6d8487d2-cb5b-4167-bc53-1cc741da4d5e 00:09:22.136 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.136 10:37:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.396 [2024-11-18 10:37:48.058082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:22.396 [2024-11-18 10:37:48.058132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:22.396 [2024-11-18 10:37:48.058140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:22.396 [2024-11-18 10:37:48.058462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:22.396 [2024-11-18 10:37:48.058642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:22.396 [2024-11-18 10:37:48.058662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:22.396 [2024-11-18 10:37:48.058909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.396 NewBaseBdev 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.396 [ 00:09:22.396 { 00:09:22.396 "name": "NewBaseBdev", 00:09:22.396 "aliases": [ 00:09:22.396 "6d8487d2-cb5b-4167-bc53-1cc741da4d5e" 00:09:22.396 ], 00:09:22.396 "product_name": "Malloc disk", 00:09:22.396 "block_size": 512, 00:09:22.396 "num_blocks": 65536, 00:09:22.396 "uuid": "6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:22.396 "assigned_rate_limits": { 00:09:22.396 "rw_ios_per_sec": 0, 00:09:22.396 "rw_mbytes_per_sec": 0, 00:09:22.396 "r_mbytes_per_sec": 0, 00:09:22.396 "w_mbytes_per_sec": 0 00:09:22.396 }, 00:09:22.396 "claimed": true, 00:09:22.396 "claim_type": "exclusive_write", 00:09:22.396 "zoned": false, 00:09:22.396 "supported_io_types": { 00:09:22.396 "read": true, 00:09:22.396 "write": true, 00:09:22.396 "unmap": true, 00:09:22.396 "flush": true, 00:09:22.396 "reset": true, 00:09:22.396 "nvme_admin": false, 00:09:22.396 "nvme_io": false, 00:09:22.396 "nvme_io_md": false, 00:09:22.396 "write_zeroes": true, 00:09:22.396 "zcopy": true, 00:09:22.396 "get_zone_info": false, 00:09:22.396 "zone_management": false, 00:09:22.396 "zone_append": false, 00:09:22.396 "compare": false, 00:09:22.396 "compare_and_write": false, 00:09:22.396 "abort": true, 00:09:22.396 "seek_hole": false, 00:09:22.396 "seek_data": false, 00:09:22.396 "copy": true, 00:09:22.396 "nvme_iov_md": false 00:09:22.396 }, 00:09:22.396 "memory_domains": [ 00:09:22.396 { 00:09:22.396 
"dma_device_id": "system", 00:09:22.396 "dma_device_type": 1 00:09:22.396 }, 00:09:22.396 { 00:09:22.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.396 "dma_device_type": 2 00:09:22.396 } 00:09:22.396 ], 00:09:22.396 "driver_specific": {} 00:09:22.396 } 00:09:22.396 ] 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.396 "name": "Existed_Raid", 00:09:22.396 "uuid": "bf50d99b-b7cd-40a3-a61e-917b98d1253f", 00:09:22.396 "strip_size_kb": 0, 00:09:22.396 "state": "online", 00:09:22.396 "raid_level": "raid1", 00:09:22.396 "superblock": false, 00:09:22.396 "num_base_bdevs": 3, 00:09:22.396 "num_base_bdevs_discovered": 3, 00:09:22.396 "num_base_bdevs_operational": 3, 00:09:22.396 "base_bdevs_list": [ 00:09:22.396 { 00:09:22.396 "name": "NewBaseBdev", 00:09:22.396 "uuid": "6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:22.396 "is_configured": true, 00:09:22.396 "data_offset": 0, 00:09:22.396 "data_size": 65536 00:09:22.396 }, 00:09:22.396 { 00:09:22.396 "name": "BaseBdev2", 00:09:22.396 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:22.396 "is_configured": true, 00:09:22.396 "data_offset": 0, 00:09:22.396 "data_size": 65536 00:09:22.396 }, 00:09:22.396 { 00:09:22.396 "name": "BaseBdev3", 00:09:22.396 "uuid": "4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:22.396 "is_configured": true, 00:09:22.396 "data_offset": 0, 00:09:22.396 "data_size": 65536 00:09:22.396 } 00:09:22.396 ] 00:09:22.396 }' 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.396 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.655 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.655 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.656 10:37:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.656 [2024-11-18 10:37:48.505562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.656 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.656 "name": "Existed_Raid", 00:09:22.656 "aliases": [ 00:09:22.656 "bf50d99b-b7cd-40a3-a61e-917b98d1253f" 00:09:22.656 ], 00:09:22.656 "product_name": "Raid Volume", 00:09:22.656 "block_size": 512, 00:09:22.656 "num_blocks": 65536, 00:09:22.656 "uuid": "bf50d99b-b7cd-40a3-a61e-917b98d1253f", 00:09:22.656 "assigned_rate_limits": { 00:09:22.656 "rw_ios_per_sec": 0, 00:09:22.656 "rw_mbytes_per_sec": 0, 00:09:22.656 "r_mbytes_per_sec": 0, 00:09:22.656 "w_mbytes_per_sec": 0 00:09:22.656 }, 00:09:22.656 "claimed": false, 00:09:22.656 "zoned": false, 00:09:22.656 "supported_io_types": { 00:09:22.656 "read": true, 00:09:22.656 "write": true, 00:09:22.656 "unmap": false, 00:09:22.656 "flush": false, 00:09:22.656 "reset": true, 00:09:22.656 "nvme_admin": false, 00:09:22.656 "nvme_io": false, 00:09:22.656 "nvme_io_md": false, 00:09:22.656 "write_zeroes": true, 00:09:22.656 "zcopy": false, 00:09:22.656 
"get_zone_info": false, 00:09:22.656 "zone_management": false, 00:09:22.656 "zone_append": false, 00:09:22.656 "compare": false, 00:09:22.656 "compare_and_write": false, 00:09:22.656 "abort": false, 00:09:22.656 "seek_hole": false, 00:09:22.656 "seek_data": false, 00:09:22.656 "copy": false, 00:09:22.656 "nvme_iov_md": false 00:09:22.656 }, 00:09:22.656 "memory_domains": [ 00:09:22.656 { 00:09:22.656 "dma_device_id": "system", 00:09:22.656 "dma_device_type": 1 00:09:22.656 }, 00:09:22.656 { 00:09:22.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.656 "dma_device_type": 2 00:09:22.656 }, 00:09:22.656 { 00:09:22.656 "dma_device_id": "system", 00:09:22.656 "dma_device_type": 1 00:09:22.656 }, 00:09:22.656 { 00:09:22.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.656 "dma_device_type": 2 00:09:22.656 }, 00:09:22.656 { 00:09:22.656 "dma_device_id": "system", 00:09:22.656 "dma_device_type": 1 00:09:22.656 }, 00:09:22.656 { 00:09:22.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.656 "dma_device_type": 2 00:09:22.656 } 00:09:22.656 ], 00:09:22.656 "driver_specific": { 00:09:22.656 "raid": { 00:09:22.656 "uuid": "bf50d99b-b7cd-40a3-a61e-917b98d1253f", 00:09:22.656 "strip_size_kb": 0, 00:09:22.656 "state": "online", 00:09:22.656 "raid_level": "raid1", 00:09:22.656 "superblock": false, 00:09:22.656 "num_base_bdevs": 3, 00:09:22.656 "num_base_bdevs_discovered": 3, 00:09:22.656 "num_base_bdevs_operational": 3, 00:09:22.656 "base_bdevs_list": [ 00:09:22.656 { 00:09:22.656 "name": "NewBaseBdev", 00:09:22.656 "uuid": "6d8487d2-cb5b-4167-bc53-1cc741da4d5e", 00:09:22.656 "is_configured": true, 00:09:22.656 "data_offset": 0, 00:09:22.656 "data_size": 65536 00:09:22.656 }, 00:09:22.656 { 00:09:22.656 "name": "BaseBdev2", 00:09:22.656 "uuid": "060a7f57-82d0-492f-bc53-ac9832cda77f", 00:09:22.656 "is_configured": true, 00:09:22.656 "data_offset": 0, 00:09:22.656 "data_size": 65536 00:09:22.656 }, 00:09:22.656 { 00:09:22.656 "name": "BaseBdev3", 00:09:22.656 "uuid": 
"4bdd1ee9-5534-4bdb-8c8f-22358b627da4", 00:09:22.656 "is_configured": true, 00:09:22.656 "data_offset": 0, 00:09:22.656 "data_size": 65536 00:09:22.656 } 00:09:22.656 ] 00:09:22.656 } 00:09:22.656 } 00:09:22.656 }' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:22.915 BaseBdev2 00:09:22.915 BaseBdev3' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.915 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:22.916 [2024-11-18 10:37:48.780842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.916 [2024-11-18 10:37:48.780909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.916 [2024-11-18 10:37:48.780979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.916 [2024-11-18 10:37:48.781286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.916 [2024-11-18 10:37:48.781297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67267 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67267 ']' 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67267 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.916 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67267 00:09:23.175 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.175 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.175 killing process with pid 67267 00:09:23.175 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67267' 00:09:23.175 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67267 00:09:23.175 
[2024-11-18 10:37:48.828850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.175 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67267 00:09:23.455 [2024-11-18 10:37:49.140040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.842 ************************************ 00:09:24.842 END TEST raid_state_function_test 00:09:24.842 ************************************ 00:09:24.842 10:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:24.842 00:09:24.842 real 0m10.574s 00:09:24.842 user 0m16.576s 00:09:24.842 sys 0m1.959s 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.843 10:37:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:24.843 10:37:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:24.843 10:37:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.843 10:37:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.843 ************************************ 00:09:24.843 START TEST raid_state_function_test_sb 00:09:24.843 ************************************ 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:24.843 10:37:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:24.843 
10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67894 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67894' 00:09:24.843 Process raid pid: 67894 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67894 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67894 ']' 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.843 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.843 [2024-11-18 10:37:50.462019] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:24.843 [2024-11-18 10:37:50.462240] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.843 [2024-11-18 10:37:50.636120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.103 [2024-11-18 10:37:50.762994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.362 [2024-11-18 10:37:50.991871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.362 [2024-11-18 10:37:50.992017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.662 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.663 [2024-11-18 10:37:51.280544] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.663 [2024-11-18 10:37:51.280598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.663 [2024-11-18 10:37:51.280608] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.663 [2024-11-18 10:37:51.280618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.663 [2024-11-18 10:37:51.280624] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:25.663 [2024-11-18 10:37:51.280633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.663 "name": "Existed_Raid", 00:09:25.663 "uuid": "e070b427-176e-43b4-a8a5-8be32bc0c202", 00:09:25.663 "strip_size_kb": 0, 00:09:25.663 "state": "configuring", 00:09:25.663 "raid_level": "raid1", 00:09:25.663 "superblock": true, 00:09:25.663 "num_base_bdevs": 3, 00:09:25.663 "num_base_bdevs_discovered": 0, 00:09:25.663 "num_base_bdevs_operational": 3, 00:09:25.663 "base_bdevs_list": [ 00:09:25.663 { 00:09:25.663 "name": "BaseBdev1", 00:09:25.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.663 "is_configured": false, 00:09:25.663 "data_offset": 0, 00:09:25.663 "data_size": 0 00:09:25.663 }, 00:09:25.663 { 00:09:25.663 "name": "BaseBdev2", 00:09:25.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.663 "is_configured": false, 00:09:25.663 "data_offset": 0, 00:09:25.663 "data_size": 0 00:09:25.663 }, 00:09:25.663 { 00:09:25.663 "name": "BaseBdev3", 00:09:25.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.663 "is_configured": false, 00:09:25.663 "data_offset": 0, 00:09:25.663 "data_size": 0 00:09:25.663 } 00:09:25.663 ] 00:09:25.663 }' 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.663 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.943 [2024-11-18 10:37:51.683790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.943 [2024-11-18 10:37:51.683863] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.943 [2024-11-18 10:37:51.695773] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.943 [2024-11-18 10:37:51.695814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.943 [2024-11-18 10:37:51.695824] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.943 [2024-11-18 10:37:51.695833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.943 [2024-11-18 10:37:51.695839] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.943 [2024-11-18 10:37:51.695848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.943 [2024-11-18 10:37:51.748818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.943 BaseBdev1 
00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.943 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.943 [ 00:09:25.943 { 00:09:25.943 "name": "BaseBdev1", 00:09:25.943 "aliases": [ 00:09:25.943 "2e1d14f7-d210-440d-87ec-1644292db229" 00:09:25.943 ], 00:09:25.944 "product_name": "Malloc disk", 00:09:25.944 "block_size": 512, 00:09:25.944 "num_blocks": 65536, 00:09:25.944 "uuid": "2e1d14f7-d210-440d-87ec-1644292db229", 00:09:25.944 "assigned_rate_limits": { 00:09:25.944 
"rw_ios_per_sec": 0, 00:09:25.944 "rw_mbytes_per_sec": 0, 00:09:25.944 "r_mbytes_per_sec": 0, 00:09:25.944 "w_mbytes_per_sec": 0 00:09:25.944 }, 00:09:25.944 "claimed": true, 00:09:25.944 "claim_type": "exclusive_write", 00:09:25.944 "zoned": false, 00:09:25.944 "supported_io_types": { 00:09:25.944 "read": true, 00:09:25.944 "write": true, 00:09:25.944 "unmap": true, 00:09:25.944 "flush": true, 00:09:25.944 "reset": true, 00:09:25.944 "nvme_admin": false, 00:09:25.944 "nvme_io": false, 00:09:25.944 "nvme_io_md": false, 00:09:25.944 "write_zeroes": true, 00:09:25.944 "zcopy": true, 00:09:25.944 "get_zone_info": false, 00:09:25.944 "zone_management": false, 00:09:25.944 "zone_append": false, 00:09:25.944 "compare": false, 00:09:25.944 "compare_and_write": false, 00:09:25.944 "abort": true, 00:09:25.944 "seek_hole": false, 00:09:25.944 "seek_data": false, 00:09:25.944 "copy": true, 00:09:25.944 "nvme_iov_md": false 00:09:25.944 }, 00:09:25.944 "memory_domains": [ 00:09:25.944 { 00:09:25.944 "dma_device_id": "system", 00:09:25.944 "dma_device_type": 1 00:09:25.944 }, 00:09:25.944 { 00:09:25.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.944 "dma_device_type": 2 00:09:25.944 } 00:09:25.944 ], 00:09:25.944 "driver_specific": {} 00:09:25.944 } 00:09:25.944 ] 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.944 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.204 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.204 "name": "Existed_Raid", 00:09:26.204 "uuid": "e424316a-2793-4414-870f-f384bd4611d6", 00:09:26.204 "strip_size_kb": 0, 00:09:26.204 "state": "configuring", 00:09:26.204 "raid_level": "raid1", 00:09:26.204 "superblock": true, 00:09:26.204 "num_base_bdevs": 3, 00:09:26.204 "num_base_bdevs_discovered": 1, 00:09:26.204 "num_base_bdevs_operational": 3, 00:09:26.204 "base_bdevs_list": [ 00:09:26.204 { 00:09:26.204 "name": "BaseBdev1", 00:09:26.204 "uuid": "2e1d14f7-d210-440d-87ec-1644292db229", 00:09:26.204 "is_configured": true, 00:09:26.204 "data_offset": 2048, 00:09:26.204 "data_size": 63488 
00:09:26.204 }, 00:09:26.204 { 00:09:26.204 "name": "BaseBdev2", 00:09:26.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.204 "is_configured": false, 00:09:26.204 "data_offset": 0, 00:09:26.204 "data_size": 0 00:09:26.204 }, 00:09:26.204 { 00:09:26.204 "name": "BaseBdev3", 00:09:26.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.204 "is_configured": false, 00:09:26.204 "data_offset": 0, 00:09:26.204 "data_size": 0 00:09:26.204 } 00:09:26.204 ] 00:09:26.204 }' 00:09:26.204 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.204 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.464 [2024-11-18 10:37:52.247991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.464 [2024-11-18 10:37:52.248093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.464 [2024-11-18 10:37:52.260033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.464 [2024-11-18 10:37:52.262107] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.464 [2024-11-18 10:37:52.262218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.464 [2024-11-18 10:37:52.262252] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.464 [2024-11-18 10:37:52.262276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.464 "name": "Existed_Raid", 00:09:26.464 "uuid": "ed30b066-b733-4f5f-ace4-f279f45656bf", 00:09:26.464 "strip_size_kb": 0, 00:09:26.464 "state": "configuring", 00:09:26.464 "raid_level": "raid1", 00:09:26.464 "superblock": true, 00:09:26.464 "num_base_bdevs": 3, 00:09:26.464 "num_base_bdevs_discovered": 1, 00:09:26.464 "num_base_bdevs_operational": 3, 00:09:26.464 "base_bdevs_list": [ 00:09:26.464 { 00:09:26.464 "name": "BaseBdev1", 00:09:26.464 "uuid": "2e1d14f7-d210-440d-87ec-1644292db229", 00:09:26.464 "is_configured": true, 00:09:26.464 "data_offset": 2048, 00:09:26.464 "data_size": 63488 00:09:26.464 }, 00:09:26.464 { 00:09:26.464 "name": "BaseBdev2", 00:09:26.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.464 "is_configured": false, 00:09:26.464 "data_offset": 0, 00:09:26.464 "data_size": 0 00:09:26.464 }, 00:09:26.464 { 00:09:26.464 "name": "BaseBdev3", 00:09:26.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.464 "is_configured": false, 00:09:26.464 "data_offset": 0, 00:09:26.464 "data_size": 0 00:09:26.464 } 00:09:26.464 ] 00:09:26.464 }' 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.464 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 [2024-11-18 10:37:52.794599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.034 BaseBdev2 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 [ 00:09:27.034 { 00:09:27.034 "name": "BaseBdev2", 00:09:27.034 "aliases": [ 00:09:27.034 "f932f8c9-0862-4192-ad35-390d5e26fab3" 00:09:27.034 ], 00:09:27.034 "product_name": "Malloc disk", 00:09:27.034 "block_size": 512, 00:09:27.034 "num_blocks": 65536, 00:09:27.034 "uuid": "f932f8c9-0862-4192-ad35-390d5e26fab3", 00:09:27.034 "assigned_rate_limits": { 00:09:27.034 "rw_ios_per_sec": 0, 00:09:27.034 "rw_mbytes_per_sec": 0, 00:09:27.034 "r_mbytes_per_sec": 0, 00:09:27.034 "w_mbytes_per_sec": 0 00:09:27.034 }, 00:09:27.034 "claimed": true, 00:09:27.034 "claim_type": "exclusive_write", 00:09:27.034 "zoned": false, 00:09:27.034 "supported_io_types": { 00:09:27.034 "read": true, 00:09:27.034 "write": true, 00:09:27.034 "unmap": true, 00:09:27.034 "flush": true, 00:09:27.034 "reset": true, 00:09:27.034 "nvme_admin": false, 00:09:27.034 "nvme_io": false, 00:09:27.034 "nvme_io_md": false, 00:09:27.034 "write_zeroes": true, 00:09:27.034 "zcopy": true, 00:09:27.034 "get_zone_info": false, 00:09:27.034 "zone_management": false, 00:09:27.034 "zone_append": false, 00:09:27.034 "compare": false, 00:09:27.034 "compare_and_write": false, 00:09:27.034 "abort": true, 00:09:27.034 "seek_hole": false, 00:09:27.034 "seek_data": false, 00:09:27.034 "copy": true, 00:09:27.034 "nvme_iov_md": false 00:09:27.034 }, 00:09:27.034 "memory_domains": [ 00:09:27.034 { 00:09:27.034 "dma_device_id": "system", 00:09:27.034 "dma_device_type": 1 00:09:27.034 }, 00:09:27.034 { 00:09:27.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.034 "dma_device_type": 2 00:09:27.034 } 00:09:27.034 ], 00:09:27.034 "driver_specific": {} 00:09:27.034 } 00:09:27.034 ] 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.034 
10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.034 "name": "Existed_Raid", 00:09:27.034 "uuid": "ed30b066-b733-4f5f-ace4-f279f45656bf", 00:09:27.034 "strip_size_kb": 0, 00:09:27.034 "state": "configuring", 00:09:27.034 "raid_level": "raid1", 00:09:27.034 "superblock": true, 00:09:27.034 "num_base_bdevs": 3, 00:09:27.034 "num_base_bdevs_discovered": 2, 00:09:27.034 "num_base_bdevs_operational": 3, 00:09:27.034 "base_bdevs_list": [ 00:09:27.034 { 00:09:27.034 "name": "BaseBdev1", 00:09:27.034 "uuid": "2e1d14f7-d210-440d-87ec-1644292db229", 00:09:27.034 "is_configured": true, 00:09:27.034 "data_offset": 2048, 00:09:27.034 "data_size": 63488 00:09:27.034 }, 00:09:27.034 { 00:09:27.034 "name": "BaseBdev2", 00:09:27.034 "uuid": "f932f8c9-0862-4192-ad35-390d5e26fab3", 00:09:27.034 "is_configured": true, 00:09:27.034 "data_offset": 2048, 00:09:27.034 "data_size": 63488 00:09:27.034 }, 00:09:27.034 { 00:09:27.034 "name": "BaseBdev3", 00:09:27.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.034 "is_configured": false, 00:09:27.034 "data_offset": 0, 00:09:27.034 "data_size": 0 00:09:27.034 } 00:09:27.034 ] 00:09:27.034 }' 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.034 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.604 [2024-11-18 10:37:53.352101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.604 [2024-11-18 10:37:53.352504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:27.604 [2024-11-18 10:37:53.352535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:27.604 [2024-11-18 10:37:53.352832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:27.604 [2024-11-18 10:37:53.352996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.604 [2024-11-18 10:37:53.353004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:27.604 BaseBdev3 00:09:27.604 [2024-11-18 10:37:53.353155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.604 10:37:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.604 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.604 [ 00:09:27.604 { 00:09:27.604 "name": "BaseBdev3", 00:09:27.604 "aliases": [ 00:09:27.604 "f8f3645b-100c-4546-960d-c39766a00b1f" 00:09:27.604 ], 00:09:27.604 "product_name": "Malloc disk", 00:09:27.604 "block_size": 512, 00:09:27.604 "num_blocks": 65536, 00:09:27.604 "uuid": "f8f3645b-100c-4546-960d-c39766a00b1f", 00:09:27.604 "assigned_rate_limits": { 00:09:27.604 "rw_ios_per_sec": 0, 00:09:27.604 "rw_mbytes_per_sec": 0, 00:09:27.604 "r_mbytes_per_sec": 0, 00:09:27.604 "w_mbytes_per_sec": 0 00:09:27.604 }, 00:09:27.604 "claimed": true, 00:09:27.604 "claim_type": "exclusive_write", 00:09:27.605 "zoned": false, 00:09:27.605 "supported_io_types": { 00:09:27.605 "read": true, 00:09:27.605 "write": true, 00:09:27.605 "unmap": true, 00:09:27.605 "flush": true, 00:09:27.605 "reset": true, 00:09:27.605 "nvme_admin": false, 00:09:27.605 "nvme_io": false, 00:09:27.605 "nvme_io_md": false, 00:09:27.605 "write_zeroes": true, 00:09:27.605 "zcopy": true, 00:09:27.605 "get_zone_info": false, 00:09:27.605 "zone_management": false, 00:09:27.605 "zone_append": false, 00:09:27.605 "compare": false, 00:09:27.605 "compare_and_write": false, 00:09:27.605 "abort": true, 00:09:27.605 "seek_hole": false, 00:09:27.605 "seek_data": false, 00:09:27.605 "copy": true, 00:09:27.605 "nvme_iov_md": false 00:09:27.605 }, 00:09:27.605 "memory_domains": [ 00:09:27.605 { 00:09:27.605 "dma_device_id": "system", 00:09:27.605 "dma_device_type": 1 00:09:27.605 }, 00:09:27.605 { 00:09:27.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.605 "dma_device_type": 2 00:09:27.605 } 00:09:27.605 ], 00:09:27.605 "driver_specific": {} 00:09:27.605 } 00:09:27.605 ] 
00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.605 
10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.605 "name": "Existed_Raid", 00:09:27.605 "uuid": "ed30b066-b733-4f5f-ace4-f279f45656bf", 00:09:27.605 "strip_size_kb": 0, 00:09:27.605 "state": "online", 00:09:27.605 "raid_level": "raid1", 00:09:27.605 "superblock": true, 00:09:27.605 "num_base_bdevs": 3, 00:09:27.605 "num_base_bdevs_discovered": 3, 00:09:27.605 "num_base_bdevs_operational": 3, 00:09:27.605 "base_bdevs_list": [ 00:09:27.605 { 00:09:27.605 "name": "BaseBdev1", 00:09:27.605 "uuid": "2e1d14f7-d210-440d-87ec-1644292db229", 00:09:27.605 "is_configured": true, 00:09:27.605 "data_offset": 2048, 00:09:27.605 "data_size": 63488 00:09:27.605 }, 00:09:27.605 { 00:09:27.605 "name": "BaseBdev2", 00:09:27.605 "uuid": "f932f8c9-0862-4192-ad35-390d5e26fab3", 00:09:27.605 "is_configured": true, 00:09:27.605 "data_offset": 2048, 00:09:27.605 "data_size": 63488 00:09:27.605 }, 00:09:27.605 { 00:09:27.605 "name": "BaseBdev3", 00:09:27.605 "uuid": "f8f3645b-100c-4546-960d-c39766a00b1f", 00:09:27.605 "is_configured": true, 00:09:27.605 "data_offset": 2048, 00:09:27.605 "data_size": 63488 00:09:27.605 } 00:09:27.605 ] 00:09:27.605 }' 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.605 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:28.175 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.175 [2024-11-18 10:37:53.839590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:28.176 "name": "Existed_Raid",
00:09:28.176 "aliases": [
00:09:28.176 "ed30b066-b733-4f5f-ace4-f279f45656bf"
00:09:28.176 ],
00:09:28.176 "product_name": "Raid Volume",
00:09:28.176 "block_size": 512,
00:09:28.176 "num_blocks": 63488,
00:09:28.176 "uuid": "ed30b066-b733-4f5f-ace4-f279f45656bf",
00:09:28.176 "assigned_rate_limits": {
00:09:28.176 "rw_ios_per_sec": 0,
00:09:28.176 "rw_mbytes_per_sec": 0,
00:09:28.176 "r_mbytes_per_sec": 0,
00:09:28.176 "w_mbytes_per_sec": 0
00:09:28.176 },
00:09:28.176 "claimed": false,
00:09:28.176 "zoned": false,
00:09:28.176 "supported_io_types": {
00:09:28.176 "read": true,
00:09:28.176 "write": true,
00:09:28.176 "unmap": false,
00:09:28.176 "flush": false,
00:09:28.176 "reset": true,
00:09:28.176 "nvme_admin": false,
00:09:28.176 "nvme_io": false,
00:09:28.176 "nvme_io_md": false,
00:09:28.176 "write_zeroes": true,
00:09:28.176 "zcopy": false,
00:09:28.176 "get_zone_info": false,
00:09:28.176 "zone_management": false,
00:09:28.176 "zone_append": false,
00:09:28.176 "compare": false,
00:09:28.176 "compare_and_write": false,
00:09:28.176 "abort": false,
00:09:28.176 "seek_hole": false,
00:09:28.176 "seek_data": false,
00:09:28.176 "copy": false,
00:09:28.176 "nvme_iov_md": false
00:09:28.176 },
00:09:28.176 "memory_domains": [
00:09:28.176 {
00:09:28.176 "dma_device_id": "system",
00:09:28.176 "dma_device_type": 1
00:09:28.176 },
00:09:28.176 {
00:09:28.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:28.176 "dma_device_type": 2
00:09:28.176 },
00:09:28.176 {
00:09:28.176 "dma_device_id": "system",
00:09:28.176 "dma_device_type": 1
00:09:28.176 },
00:09:28.176 {
00:09:28.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:28.176 "dma_device_type": 2
00:09:28.176 },
00:09:28.176 {
00:09:28.176 "dma_device_id": "system",
00:09:28.176 "dma_device_type": 1
00:09:28.176 },
00:09:28.176 {
00:09:28.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:28.176 "dma_device_type": 2
00:09:28.176 }
00:09:28.176 ],
00:09:28.176 "driver_specific": {
00:09:28.176 "raid": {
00:09:28.176 "uuid": "ed30b066-b733-4f5f-ace4-f279f45656bf",
00:09:28.176 "strip_size_kb": 0,
00:09:28.176 "state": "online",
00:09:28.176 "raid_level": "raid1",
00:09:28.176 "superblock": true,
00:09:28.176 "num_base_bdevs": 3,
00:09:28.176 "num_base_bdevs_discovered": 3,
00:09:28.176 "num_base_bdevs_operational": 3,
00:09:28.176 "base_bdevs_list": [
00:09:28.176 {
00:09:28.176 "name": "BaseBdev1",
00:09:28.176 "uuid": "2e1d14f7-d210-440d-87ec-1644292db229",
00:09:28.176 "is_configured": true,
00:09:28.176 "data_offset": 2048,
00:09:28.176 "data_size": 63488
00:09:28.176 },
00:09:28.176 {
00:09:28.176 "name": "BaseBdev2",
00:09:28.176 "uuid": "f932f8c9-0862-4192-ad35-390d5e26fab3",
00:09:28.176 "is_configured": true,
00:09:28.176 "data_offset": 2048,
00:09:28.176 "data_size": 63488
00:09:28.176 },
00:09:28.176 {
00:09:28.176 "name": "BaseBdev3",
00:09:28.176 "uuid": "f8f3645b-100c-4546-960d-c39766a00b1f",
00:09:28.176 "is_configured": true,
00:09:28.176 "data_offset": 2048,
00:09:28.176 "data_size": 63488
00:09:28.176 }
00:09:28.176 ]
00:09:28.176 }
00:09:28.176 }
00:09:28.176 }'
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:28.176 BaseBdev2
00:09:28.176 BaseBdev3'
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.176 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.176 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.437 [2024-11-18 10:37:54.138878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.437 "name": "Existed_Raid",
00:09:28.437 "uuid": "ed30b066-b733-4f5f-ace4-f279f45656bf",
00:09:28.437 "strip_size_kb": 0,
00:09:28.437 "state": "online",
00:09:28.437 "raid_level": "raid1",
00:09:28.437 "superblock": true,
00:09:28.437 "num_base_bdevs": 3,
00:09:28.437 "num_base_bdevs_discovered": 2,
00:09:28.437 "num_base_bdevs_operational": 2,
00:09:28.437 "base_bdevs_list": [
00:09:28.437 {
00:09:28.437 "name": null,
00:09:28.437 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.437 "is_configured": false,
00:09:28.437 "data_offset": 0,
00:09:28.437 "data_size": 63488
00:09:28.437 },
00:09:28.437 {
00:09:28.437 "name": "BaseBdev2",
00:09:28.437 "uuid": "f932f8c9-0862-4192-ad35-390d5e26fab3",
00:09:28.437 "is_configured": true,
00:09:28.437 "data_offset": 2048,
00:09:28.437 "data_size": 63488
00:09:28.437 },
00:09:28.437 {
00:09:28.437 "name": "BaseBdev3",
00:09:28.437 "uuid": "f8f3645b-100c-4546-960d-c39766a00b1f",
00:09:28.437 "is_configured": true,
00:09:28.437 "data_offset": 2048,
00:09:28.437 "data_size": 63488
00:09:28.437 }
00:09:28.437 ]
00:09:28.437 }'
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.437 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.006 [2024-11-18 10:37:54.700957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.006 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.006 [2024-11-18 10:37:54.856900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 [2024-11-18 10:37:54.857026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline [2024-11-18 10:37:54.959322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct [2024-11-18 10:37:54.959463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct [2024-11-18 10:37:54.959512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:29.265 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.265 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:29.265 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:29.265 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.265 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.265 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:29.265 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.265 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.265 BaseBdev2
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.265 [
00:09:29.265 {
00:09:29.265 "name": "BaseBdev2",
00:09:29.265 "aliases": [
00:09:29.265 "057bc2dc-3d5e-4b36-ad07-290dbf081171"
00:09:29.265 ],
00:09:29.265 "product_name": "Malloc disk",
00:09:29.265 "block_size": 512,
00:09:29.265 "num_blocks": 65536,
00:09:29.265 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171",
00:09:29.265 "assigned_rate_limits": {
00:09:29.265 "rw_ios_per_sec": 0,
00:09:29.265 "rw_mbytes_per_sec": 0,
00:09:29.265 "r_mbytes_per_sec": 0,
00:09:29.265 "w_mbytes_per_sec": 0
00:09:29.265 },
00:09:29.265 "claimed": false,
00:09:29.265 "zoned": false,
00:09:29.265 "supported_io_types": {
00:09:29.265 "read": true,
00:09:29.265 "write": true,
00:09:29.265 "unmap": true,
00:09:29.265 "flush": true,
00:09:29.265 "reset": true,
00:09:29.265 "nvme_admin": false,
00:09:29.265 "nvme_io": false,
00:09:29.265 "nvme_io_md": false,
00:09:29.265 "write_zeroes": true,
00:09:29.265 "zcopy": true,
00:09:29.265 "get_zone_info": false,
00:09:29.265 "zone_management": false,
00:09:29.265 "zone_append": false,
00:09:29.265 "compare": false,
00:09:29.265 "compare_and_write": false,
00:09:29.265 "abort": true,
00:09:29.265 "seek_hole": false,
00:09:29.265 "seek_data": false,
00:09:29.265 "copy": true,
00:09:29.265 "nvme_iov_md": false
00:09:29.265 },
00:09:29.265 "memory_domains": [
00:09:29.265 {
00:09:29.265 "dma_device_id": "system",
00:09:29.265 "dma_device_type": 1
00:09:29.265 },
00:09:29.265 {
00:09:29.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:29.265 "dma_device_type": 2
00:09:29.265 }
00:09:29.265 ],
00:09:29.265 "driver_specific": {}
00:09:29.265 }
00:09:29.265 ]
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.265 BaseBdev3
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:29.265 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:29.266 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:29.266 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:29.266 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:29.266 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:29.266 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:29.266 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.266 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.524 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.524 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:29.524 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.524 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.524 [
00:09:29.524 {
00:09:29.524 "name": "BaseBdev3",
00:09:29.524 "aliases": [
00:09:29.524 "4ddb0bcf-22d1-49db-9373-760c037c91b4"
00:09:29.524 ],
00:09:29.524 "product_name": "Malloc disk",
00:09:29.524 "block_size": 512,
00:09:29.524 "num_blocks": 65536,
00:09:29.524 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4",
00:09:29.524 "assigned_rate_limits": {
00:09:29.524 "rw_ios_per_sec": 0,
00:09:29.525 "rw_mbytes_per_sec": 0,
00:09:29.525 "r_mbytes_per_sec": 0,
00:09:29.525 "w_mbytes_per_sec": 0
00:09:29.525 },
00:09:29.525 "claimed": false,
00:09:29.525 "zoned": false,
00:09:29.525 "supported_io_types": {
00:09:29.525 "read": true,
00:09:29.525 "write": true,
00:09:29.525 "unmap": true,
00:09:29.525 "flush": true,
00:09:29.525 "reset": true,
00:09:29.525 "nvme_admin": false,
00:09:29.525 "nvme_io": false,
00:09:29.525 "nvme_io_md": false,
00:09:29.525 "write_zeroes": true,
00:09:29.525 "zcopy": true,
00:09:29.525 "get_zone_info": false,
00:09:29.525 "zone_management": false,
00:09:29.525 "zone_append": false,
00:09:29.525 "compare": false,
00:09:29.525 "compare_and_write": false,
00:09:29.525 "abort": true,
00:09:29.525 "seek_hole": false,
00:09:29.525 "seek_data": false,
00:09:29.525 "copy": true,
00:09:29.525 "nvme_iov_md": false
00:09:29.525 },
00:09:29.525 "memory_domains": [
00:09:29.525 {
00:09:29.525 "dma_device_id": "system",
00:09:29.525 "dma_device_type": 1
00:09:29.525 },
00:09:29.525 {
00:09:29.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:29.525 "dma_device_type": 2
00:09:29.525 }
00:09:29.525 ],
00:09:29.525 "driver_specific": {}
00:09:29.525 }
00:09:29.525 ]
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.525 [2024-11-18 10:37:55.190087] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 [2024-11-18 10:37:55.190184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now [2024-11-18 10:37:55.190228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:29.525 [2024-11-18 10:37:55.192296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:29.525 "name": "Existed_Raid",
00:09:29.525 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2",
00:09:29.525 "strip_size_kb": 0,
00:09:29.525 "state": "configuring",
00:09:29.525 "raid_level": "raid1",
00:09:29.525 "superblock": true,
00:09:29.525 "num_base_bdevs": 3,
00:09:29.525 "num_base_bdevs_discovered": 2,
00:09:29.525 "num_base_bdevs_operational": 3,
00:09:29.525 "base_bdevs_list": [
00:09:29.525 {
00:09:29.525 "name": "BaseBdev1",
00:09:29.525 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.525 "is_configured": false,
00:09:29.525 "data_offset": 0,
00:09:29.525 "data_size": 0
00:09:29.525 },
00:09:29.525 {
00:09:29.525 "name": "BaseBdev2",
00:09:29.525 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171",
00:09:29.525 "is_configured": true,
00:09:29.525 "data_offset": 2048,
00:09:29.525 "data_size": 63488
00:09:29.525 },
00:09:29.525 {
00:09:29.525 "name": "BaseBdev3",
00:09:29.525 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4",
00:09:29.525 "is_configured": true,
00:09:29.525 "data_offset": 2048,
00:09:29.525 "data_size": 63488
00:09:29.525 }
00:09:29.525 ]
00:09:29.525 }'
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:29.525 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.785 [2024-11-18 10:37:55.589368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:29.785 "name": "Existed_Raid",
00:09:29.785 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2",
00:09:29.785 "strip_size_kb": 0,
00:09:29.785 "state": "configuring",
00:09:29.785 "raid_level": "raid1",
00:09:29.785 "superblock": true,
00:09:29.785 "num_base_bdevs": 3,
00:09:29.785 "num_base_bdevs_discovered": 1,
00:09:29.785 "num_base_bdevs_operational": 3,
00:09:29.785 "base_bdevs_list": [
00:09:29.785 {
00:09:29.785 "name": "BaseBdev1",
00:09:29.785 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.785 "is_configured": false,
00:09:29.785 "data_offset": 0,
00:09:29.785 "data_size": 0
00:09:29.785 },
00:09:29.785 {
00:09:29.785 "name": null,
00:09:29.785 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171",
00:09:29.785 "is_configured": false,
00:09:29.785 "data_offset": 0,
00:09:29.785 "data_size": 63488
00:09:29.785 },
00:09:29.785 {
00:09:29.785 "name": "BaseBdev3",
00:09:29.785 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4",
00:09:29.785 "is_configured": true,
00:09:29.785 "data_offset": 2048,
00:09:29.785 "data_size": 63488
00:09:29.785 }
00:09:29.785 ]
00:09:29.785 }'
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:29.785 10:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:30.355 [2024-11-18 10:37:56.148562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:30.355 BaseBdev1
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.355 [ 00:09:30.355 { 00:09:30.355 "name": "BaseBdev1", 00:09:30.355 "aliases": [ 00:09:30.355 "68efcc2b-337e-4d80-a05a-f8a0e8d32112" 00:09:30.355 ], 00:09:30.355 "product_name": "Malloc disk", 00:09:30.355 "block_size": 512, 00:09:30.355 "num_blocks": 65536, 00:09:30.355 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:30.355 "assigned_rate_limits": { 00:09:30.355 "rw_ios_per_sec": 0, 00:09:30.355 "rw_mbytes_per_sec": 0, 00:09:30.355 "r_mbytes_per_sec": 0, 00:09:30.355 "w_mbytes_per_sec": 0 00:09:30.355 }, 00:09:30.355 "claimed": true, 00:09:30.355 "claim_type": "exclusive_write", 00:09:30.355 "zoned": false, 00:09:30.355 "supported_io_types": { 00:09:30.355 "read": true, 00:09:30.355 "write": true, 00:09:30.355 "unmap": true, 00:09:30.355 "flush": true, 00:09:30.355 "reset": true, 00:09:30.355 "nvme_admin": false, 00:09:30.355 "nvme_io": false, 00:09:30.355 "nvme_io_md": false, 00:09:30.355 "write_zeroes": true, 00:09:30.355 "zcopy": true, 00:09:30.355 "get_zone_info": false, 00:09:30.355 "zone_management": false, 00:09:30.355 "zone_append": false, 00:09:30.355 "compare": false, 00:09:30.355 "compare_and_write": false, 00:09:30.355 "abort": true, 00:09:30.355 "seek_hole": false, 00:09:30.355 "seek_data": false, 00:09:30.355 "copy": true, 00:09:30.355 "nvme_iov_md": false 00:09:30.355 }, 00:09:30.355 "memory_domains": [ 00:09:30.355 { 00:09:30.355 "dma_device_id": "system", 00:09:30.355 "dma_device_type": 1 00:09:30.355 }, 00:09:30.355 { 00:09:30.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.355 "dma_device_type": 2 00:09:30.355 } 00:09:30.355 ], 00:09:30.355 "driver_specific": {} 00:09:30.355 } 00:09:30.355 ] 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:30.355 
10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.355 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.615 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.615 "name": "Existed_Raid", 00:09:30.615 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2", 00:09:30.615 "strip_size_kb": 0, 
00:09:30.615 "state": "configuring", 00:09:30.615 "raid_level": "raid1", 00:09:30.615 "superblock": true, 00:09:30.615 "num_base_bdevs": 3, 00:09:30.615 "num_base_bdevs_discovered": 2, 00:09:30.615 "num_base_bdevs_operational": 3, 00:09:30.615 "base_bdevs_list": [ 00:09:30.615 { 00:09:30.615 "name": "BaseBdev1", 00:09:30.615 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:30.615 "is_configured": true, 00:09:30.615 "data_offset": 2048, 00:09:30.615 "data_size": 63488 00:09:30.615 }, 00:09:30.615 { 00:09:30.615 "name": null, 00:09:30.615 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171", 00:09:30.615 "is_configured": false, 00:09:30.615 "data_offset": 0, 00:09:30.615 "data_size": 63488 00:09:30.615 }, 00:09:30.615 { 00:09:30.615 "name": "BaseBdev3", 00:09:30.615 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4", 00:09:30.615 "is_configured": true, 00:09:30.615 "data_offset": 2048, 00:09:30.615 "data_size": 63488 00:09:30.615 } 00:09:30.615 ] 00:09:30.615 }' 00:09:30.615 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.615 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.874 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.875 [2024-11-18 10:37:56.687660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.875 "name": "Existed_Raid", 00:09:30.875 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2", 00:09:30.875 "strip_size_kb": 0, 00:09:30.875 "state": "configuring", 00:09:30.875 "raid_level": "raid1", 00:09:30.875 "superblock": true, 00:09:30.875 "num_base_bdevs": 3, 00:09:30.875 "num_base_bdevs_discovered": 1, 00:09:30.875 "num_base_bdevs_operational": 3, 00:09:30.875 "base_bdevs_list": [ 00:09:30.875 { 00:09:30.875 "name": "BaseBdev1", 00:09:30.875 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:30.875 "is_configured": true, 00:09:30.875 "data_offset": 2048, 00:09:30.875 "data_size": 63488 00:09:30.875 }, 00:09:30.875 { 00:09:30.875 "name": null, 00:09:30.875 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171", 00:09:30.875 "is_configured": false, 00:09:30.875 "data_offset": 0, 00:09:30.875 "data_size": 63488 00:09:30.875 }, 00:09:30.875 { 00:09:30.875 "name": null, 00:09:30.875 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4", 00:09:30.875 "is_configured": false, 00:09:30.875 "data_offset": 0, 00:09:30.875 "data_size": 63488 00:09:30.875 } 00:09:30.875 ] 00:09:30.875 }' 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.875 10:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.444 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.444 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.444 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:31.444 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.445 [2024-11-18 10:37:57.138960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.445 "name": "Existed_Raid", 00:09:31.445 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2", 00:09:31.445 "strip_size_kb": 0, 00:09:31.445 "state": "configuring", 00:09:31.445 "raid_level": "raid1", 00:09:31.445 "superblock": true, 00:09:31.445 "num_base_bdevs": 3, 00:09:31.445 "num_base_bdevs_discovered": 2, 00:09:31.445 "num_base_bdevs_operational": 3, 00:09:31.445 "base_bdevs_list": [ 00:09:31.445 { 00:09:31.445 "name": "BaseBdev1", 00:09:31.445 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:31.445 "is_configured": true, 00:09:31.445 "data_offset": 2048, 00:09:31.445 "data_size": 63488 00:09:31.445 }, 00:09:31.445 { 00:09:31.445 "name": null, 00:09:31.445 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171", 00:09:31.445 "is_configured": false, 00:09:31.445 "data_offset": 0, 00:09:31.445 "data_size": 63488 00:09:31.445 }, 00:09:31.445 { 00:09:31.445 "name": "BaseBdev3", 00:09:31.445 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4", 00:09:31.445 "is_configured": true, 00:09:31.445 "data_offset": 2048, 00:09:31.445 "data_size": 63488 00:09:31.445 } 00:09:31.445 ] 00:09:31.445 }' 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.445 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.704 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.704 [2024-11-18 10:37:57.574219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.963 "name": "Existed_Raid", 00:09:31.963 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2", 00:09:31.963 "strip_size_kb": 0, 00:09:31.963 "state": "configuring", 00:09:31.963 "raid_level": "raid1", 00:09:31.963 "superblock": true, 00:09:31.963 "num_base_bdevs": 3, 00:09:31.963 "num_base_bdevs_discovered": 1, 00:09:31.963 "num_base_bdevs_operational": 3, 00:09:31.963 "base_bdevs_list": [ 00:09:31.963 { 00:09:31.963 "name": null, 00:09:31.963 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:31.963 "is_configured": false, 00:09:31.963 "data_offset": 0, 00:09:31.963 "data_size": 63488 00:09:31.963 }, 00:09:31.963 { 00:09:31.963 "name": null, 00:09:31.963 "uuid": 
"057bc2dc-3d5e-4b36-ad07-290dbf081171", 00:09:31.963 "is_configured": false, 00:09:31.963 "data_offset": 0, 00:09:31.963 "data_size": 63488 00:09:31.963 }, 00:09:31.963 { 00:09:31.963 "name": "BaseBdev3", 00:09:31.963 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4", 00:09:31.963 "is_configured": true, 00:09:31.963 "data_offset": 2048, 00:09:31.963 "data_size": 63488 00:09:31.963 } 00:09:31.963 ] 00:09:31.963 }' 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.963 10:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.532 [2024-11-18 10:37:58.203548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.532 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.532 "name": "Existed_Raid", 00:09:32.532 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2", 00:09:32.532 "strip_size_kb": 0, 00:09:32.532 "state": "configuring", 00:09:32.532 
"raid_level": "raid1", 00:09:32.532 "superblock": true, 00:09:32.532 "num_base_bdevs": 3, 00:09:32.532 "num_base_bdevs_discovered": 2, 00:09:32.533 "num_base_bdevs_operational": 3, 00:09:32.533 "base_bdevs_list": [ 00:09:32.533 { 00:09:32.533 "name": null, 00:09:32.533 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:32.533 "is_configured": false, 00:09:32.533 "data_offset": 0, 00:09:32.533 "data_size": 63488 00:09:32.533 }, 00:09:32.533 { 00:09:32.533 "name": "BaseBdev2", 00:09:32.533 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171", 00:09:32.533 "is_configured": true, 00:09:32.533 "data_offset": 2048, 00:09:32.533 "data_size": 63488 00:09:32.533 }, 00:09:32.533 { 00:09:32.533 "name": "BaseBdev3", 00:09:32.533 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4", 00:09:32.533 "is_configured": true, 00:09:32.533 "data_offset": 2048, 00:09:32.533 "data_size": 63488 00:09:32.533 } 00:09:32.533 ] 00:09:32.533 }' 00:09:32.533 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.533 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.792 10:37:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:32.792 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 68efcc2b-337e-4d80-a05a-f8a0e8d32112 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.053 [2024-11-18 10:37:58.736411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:33.053 [2024-11-18 10:37:58.736706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:33.053 [2024-11-18 10:37:58.736753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.053 [2024-11-18 10:37:58.737042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:33.053 [2024-11-18 10:37:58.737258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:33.053 [2024-11-18 10:37:58.737304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:33.053 NewBaseBdev 00:09:33.053 [2024-11-18 10:37:58.737480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:33.053 
10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.053 [ 00:09:33.053 { 00:09:33.053 "name": "NewBaseBdev", 00:09:33.053 "aliases": [ 00:09:33.053 "68efcc2b-337e-4d80-a05a-f8a0e8d32112" 00:09:33.053 ], 00:09:33.053 "product_name": "Malloc disk", 00:09:33.053 "block_size": 512, 00:09:33.053 "num_blocks": 65536, 00:09:33.053 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:33.053 "assigned_rate_limits": { 00:09:33.053 "rw_ios_per_sec": 0, 00:09:33.053 "rw_mbytes_per_sec": 0, 00:09:33.053 "r_mbytes_per_sec": 0, 00:09:33.053 "w_mbytes_per_sec": 0 00:09:33.053 }, 00:09:33.053 "claimed": true, 00:09:33.053 "claim_type": "exclusive_write", 00:09:33.053 
"zoned": false, 00:09:33.053 "supported_io_types": { 00:09:33.053 "read": true, 00:09:33.053 "write": true, 00:09:33.053 "unmap": true, 00:09:33.053 "flush": true, 00:09:33.053 "reset": true, 00:09:33.053 "nvme_admin": false, 00:09:33.053 "nvme_io": false, 00:09:33.053 "nvme_io_md": false, 00:09:33.053 "write_zeroes": true, 00:09:33.053 "zcopy": true, 00:09:33.053 "get_zone_info": false, 00:09:33.053 "zone_management": false, 00:09:33.053 "zone_append": false, 00:09:33.053 "compare": false, 00:09:33.053 "compare_and_write": false, 00:09:33.053 "abort": true, 00:09:33.053 "seek_hole": false, 00:09:33.053 "seek_data": false, 00:09:33.053 "copy": true, 00:09:33.053 "nvme_iov_md": false 00:09:33.053 }, 00:09:33.053 "memory_domains": [ 00:09:33.053 { 00:09:33.053 "dma_device_id": "system", 00:09:33.053 "dma_device_type": 1 00:09:33.053 }, 00:09:33.053 { 00:09:33.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.053 "dma_device_type": 2 00:09:33.053 } 00:09:33.053 ], 00:09:33.053 "driver_specific": {} 00:09:33.053 } 00:09:33.053 ] 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.053 "name": "Existed_Raid", 00:09:33.053 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2", 00:09:33.053 "strip_size_kb": 0, 00:09:33.053 "state": "online", 00:09:33.053 "raid_level": "raid1", 00:09:33.053 "superblock": true, 00:09:33.053 "num_base_bdevs": 3, 00:09:33.053 "num_base_bdevs_discovered": 3, 00:09:33.053 "num_base_bdevs_operational": 3, 00:09:33.053 "base_bdevs_list": [ 00:09:33.053 { 00:09:33.053 "name": "NewBaseBdev", 00:09:33.053 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:33.053 "is_configured": true, 00:09:33.053 "data_offset": 2048, 00:09:33.053 "data_size": 63488 00:09:33.053 }, 00:09:33.053 { 00:09:33.053 "name": "BaseBdev2", 00:09:33.053 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171", 00:09:33.053 "is_configured": true, 00:09:33.053 "data_offset": 2048, 00:09:33.053 "data_size": 63488 00:09:33.053 }, 00:09:33.053 
{ 00:09:33.053 "name": "BaseBdev3", 00:09:33.053 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4", 00:09:33.053 "is_configured": true, 00:09:33.053 "data_offset": 2048, 00:09:33.053 "data_size": 63488 00:09:33.053 } 00:09:33.053 ] 00:09:33.053 }' 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.053 10:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.624 [2024-11-18 10:37:59.211950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.624 "name": "Existed_Raid", 00:09:33.624 
"aliases": [ 00:09:33.624 "4f8cc27f-1a37-4978-b6f2-9eac978c03c2" 00:09:33.624 ], 00:09:33.624 "product_name": "Raid Volume", 00:09:33.624 "block_size": 512, 00:09:33.624 "num_blocks": 63488, 00:09:33.624 "uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2", 00:09:33.624 "assigned_rate_limits": { 00:09:33.624 "rw_ios_per_sec": 0, 00:09:33.624 "rw_mbytes_per_sec": 0, 00:09:33.624 "r_mbytes_per_sec": 0, 00:09:33.624 "w_mbytes_per_sec": 0 00:09:33.624 }, 00:09:33.624 "claimed": false, 00:09:33.624 "zoned": false, 00:09:33.624 "supported_io_types": { 00:09:33.624 "read": true, 00:09:33.624 "write": true, 00:09:33.624 "unmap": false, 00:09:33.624 "flush": false, 00:09:33.624 "reset": true, 00:09:33.624 "nvme_admin": false, 00:09:33.624 "nvme_io": false, 00:09:33.624 "nvme_io_md": false, 00:09:33.624 "write_zeroes": true, 00:09:33.624 "zcopy": false, 00:09:33.624 "get_zone_info": false, 00:09:33.624 "zone_management": false, 00:09:33.624 "zone_append": false, 00:09:33.624 "compare": false, 00:09:33.624 "compare_and_write": false, 00:09:33.624 "abort": false, 00:09:33.624 "seek_hole": false, 00:09:33.624 "seek_data": false, 00:09:33.624 "copy": false, 00:09:33.624 "nvme_iov_md": false 00:09:33.624 }, 00:09:33.624 "memory_domains": [ 00:09:33.624 { 00:09:33.624 "dma_device_id": "system", 00:09:33.624 "dma_device_type": 1 00:09:33.624 }, 00:09:33.624 { 00:09:33.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.624 "dma_device_type": 2 00:09:33.624 }, 00:09:33.624 { 00:09:33.624 "dma_device_id": "system", 00:09:33.624 "dma_device_type": 1 00:09:33.624 }, 00:09:33.624 { 00:09:33.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.624 "dma_device_type": 2 00:09:33.624 }, 00:09:33.624 { 00:09:33.624 "dma_device_id": "system", 00:09:33.624 "dma_device_type": 1 00:09:33.624 }, 00:09:33.624 { 00:09:33.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.624 "dma_device_type": 2 00:09:33.624 } 00:09:33.624 ], 00:09:33.624 "driver_specific": { 00:09:33.624 "raid": { 00:09:33.624 
"uuid": "4f8cc27f-1a37-4978-b6f2-9eac978c03c2", 00:09:33.624 "strip_size_kb": 0, 00:09:33.624 "state": "online", 00:09:33.624 "raid_level": "raid1", 00:09:33.624 "superblock": true, 00:09:33.624 "num_base_bdevs": 3, 00:09:33.624 "num_base_bdevs_discovered": 3, 00:09:33.624 "num_base_bdevs_operational": 3, 00:09:33.624 "base_bdevs_list": [ 00:09:33.624 { 00:09:33.624 "name": "NewBaseBdev", 00:09:33.624 "uuid": "68efcc2b-337e-4d80-a05a-f8a0e8d32112", 00:09:33.624 "is_configured": true, 00:09:33.624 "data_offset": 2048, 00:09:33.624 "data_size": 63488 00:09:33.624 }, 00:09:33.624 { 00:09:33.624 "name": "BaseBdev2", 00:09:33.624 "uuid": "057bc2dc-3d5e-4b36-ad07-290dbf081171", 00:09:33.624 "is_configured": true, 00:09:33.624 "data_offset": 2048, 00:09:33.624 "data_size": 63488 00:09:33.624 }, 00:09:33.624 { 00:09:33.624 "name": "BaseBdev3", 00:09:33.624 "uuid": "4ddb0bcf-22d1-49db-9373-760c037c91b4", 00:09:33.624 "is_configured": true, 00:09:33.624 "data_offset": 2048, 00:09:33.624 "data_size": 63488 00:09:33.624 } 00:09:33.624 ] 00:09:33.624 } 00:09:33.624 } 00:09:33.624 }' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:33.624 BaseBdev2 00:09:33.624 BaseBdev3' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:33.624 10:37:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.624 10:37:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.624 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.624 [2024-11-18 10:37:59.479241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.624 [2024-11-18 10:37:59.479276] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.624 [2024-11-18 10:37:59.479357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.624 [2024-11-18 10:37:59.479666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.624 [2024-11-18 10:37:59.479677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:33.625 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.625 10:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67894 00:09:33.625 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 67894 ']' 00:09:33.625 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67894 00:09:33.625 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:33.625 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.625 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67894 00:09:33.884 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.884 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.884 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67894' 00:09:33.884 killing process with pid 67894 00:09:33.884 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67894 00:09:33.884 [2024-11-18 10:37:59.531186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.884 10:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67894 00:09:34.146 [2024-11-18 10:37:59.850982] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.532 10:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:35.532 00:09:35.532 real 0m10.649s 00:09:35.532 user 0m16.694s 00:09:35.532 sys 0m2.017s 00:09:35.532 10:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.532 10:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.532 ************************************ 00:09:35.532 END TEST raid_state_function_test_sb 00:09:35.532 ************************************ 00:09:35.532 10:38:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:35.532 10:38:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:35.533 10:38:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.533 10:38:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.533 ************************************ 00:09:35.533 START TEST raid_superblock_test 00:09:35.533 ************************************ 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68514 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68514 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68514 ']' 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.533 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.533 [2024-11-18 10:38:01.182072] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:35.533 [2024-11-18 10:38:01.182220] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68514 ] 00:09:35.533 [2024-11-18 10:38:01.360516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.793 [2024-11-18 10:38:01.493794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.052 [2024-11-18 10:38:01.724203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.052 [2024-11-18 10:38:01.724356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:36.313 
10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.313 10:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.313 malloc1 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.313 [2024-11-18 10:38:02.054365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:36.313 [2024-11-18 10:38:02.054474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.313 [2024-11-18 10:38:02.054516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:36.313 [2024-11-18 10:38:02.054545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.313 [2024-11-18 10:38:02.056978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.313 [2024-11-18 10:38:02.057055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:36.313 pt1 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.313 malloc2 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.313 [2024-11-18 10:38:02.118113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:36.313 [2024-11-18 10:38:02.118167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.313 [2024-11-18 10:38:02.118222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:36.313 [2024-11-18 10:38:02.118232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.313 [2024-11-18 10:38:02.120658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.313 [2024-11-18 10:38:02.120744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:36.313 
pt2 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.313 malloc3 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.313 [2024-11-18 10:38:02.185100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:36.313 [2024-11-18 10:38:02.185214] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.313 [2024-11-18 10:38:02.185251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:36.313 [2024-11-18 10:38:02.185284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.313 [2024-11-18 10:38:02.187630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.313 [2024-11-18 10:38:02.187708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:36.313 pt3 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.313 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.574 [2024-11-18 10:38:02.197142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:36.574 [2024-11-18 10:38:02.199251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.574 [2024-11-18 10:38:02.199378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:36.574 [2024-11-18 10:38:02.199573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:36.574 [2024-11-18 10:38:02.199626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.574 [2024-11-18 10:38:02.199897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:36.574 
[2024-11-18 10:38:02.200117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:36.574 [2024-11-18 10:38:02.200161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:36.574 [2024-11-18 10:38:02.200372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.574 "name": "raid_bdev1", 00:09:36.574 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:36.574 "strip_size_kb": 0, 00:09:36.574 "state": "online", 00:09:36.574 "raid_level": "raid1", 00:09:36.574 "superblock": true, 00:09:36.574 "num_base_bdevs": 3, 00:09:36.574 "num_base_bdevs_discovered": 3, 00:09:36.574 "num_base_bdevs_operational": 3, 00:09:36.574 "base_bdevs_list": [ 00:09:36.574 { 00:09:36.574 "name": "pt1", 00:09:36.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.574 "is_configured": true, 00:09:36.574 "data_offset": 2048, 00:09:36.574 "data_size": 63488 00:09:36.574 }, 00:09:36.574 { 00:09:36.574 "name": "pt2", 00:09:36.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.574 "is_configured": true, 00:09:36.574 "data_offset": 2048, 00:09:36.574 "data_size": 63488 00:09:36.574 }, 00:09:36.574 { 00:09:36.574 "name": "pt3", 00:09:36.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.574 "is_configured": true, 00:09:36.574 "data_offset": 2048, 00:09:36.574 "data_size": 63488 00:09:36.574 } 00:09:36.574 ] 00:09:36.574 }' 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.574 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.834 10:38:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.834 [2024-11-18 10:38:02.636594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.834 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.834 "name": "raid_bdev1", 00:09:36.834 "aliases": [ 00:09:36.834 "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c" 00:09:36.834 ], 00:09:36.834 "product_name": "Raid Volume", 00:09:36.834 "block_size": 512, 00:09:36.834 "num_blocks": 63488, 00:09:36.834 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:36.834 "assigned_rate_limits": { 00:09:36.834 "rw_ios_per_sec": 0, 00:09:36.834 "rw_mbytes_per_sec": 0, 00:09:36.834 "r_mbytes_per_sec": 0, 00:09:36.834 "w_mbytes_per_sec": 0 00:09:36.834 }, 00:09:36.834 "claimed": false, 00:09:36.834 "zoned": false, 00:09:36.834 "supported_io_types": { 00:09:36.834 "read": true, 00:09:36.834 "write": true, 00:09:36.834 "unmap": false, 00:09:36.834 "flush": false, 00:09:36.834 "reset": true, 00:09:36.834 "nvme_admin": false, 00:09:36.834 "nvme_io": false, 00:09:36.834 "nvme_io_md": false, 00:09:36.834 "write_zeroes": true, 00:09:36.834 "zcopy": false, 00:09:36.834 "get_zone_info": false, 00:09:36.834 "zone_management": false, 00:09:36.834 "zone_append": false, 00:09:36.834 "compare": false, 00:09:36.834 
"compare_and_write": false, 00:09:36.834 "abort": false, 00:09:36.834 "seek_hole": false, 00:09:36.834 "seek_data": false, 00:09:36.834 "copy": false, 00:09:36.834 "nvme_iov_md": false 00:09:36.834 }, 00:09:36.834 "memory_domains": [ 00:09:36.834 { 00:09:36.834 "dma_device_id": "system", 00:09:36.834 "dma_device_type": 1 00:09:36.834 }, 00:09:36.834 { 00:09:36.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.834 "dma_device_type": 2 00:09:36.834 }, 00:09:36.834 { 00:09:36.834 "dma_device_id": "system", 00:09:36.834 "dma_device_type": 1 00:09:36.834 }, 00:09:36.834 { 00:09:36.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.834 "dma_device_type": 2 00:09:36.834 }, 00:09:36.834 { 00:09:36.834 "dma_device_id": "system", 00:09:36.834 "dma_device_type": 1 00:09:36.834 }, 00:09:36.834 { 00:09:36.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.834 "dma_device_type": 2 00:09:36.834 } 00:09:36.834 ], 00:09:36.834 "driver_specific": { 00:09:36.834 "raid": { 00:09:36.834 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:36.834 "strip_size_kb": 0, 00:09:36.834 "state": "online", 00:09:36.834 "raid_level": "raid1", 00:09:36.834 "superblock": true, 00:09:36.834 "num_base_bdevs": 3, 00:09:36.834 "num_base_bdevs_discovered": 3, 00:09:36.834 "num_base_bdevs_operational": 3, 00:09:36.834 "base_bdevs_list": [ 00:09:36.834 { 00:09:36.834 "name": "pt1", 00:09:36.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.834 "is_configured": true, 00:09:36.834 "data_offset": 2048, 00:09:36.835 "data_size": 63488 00:09:36.835 }, 00:09:36.835 { 00:09:36.835 "name": "pt2", 00:09:36.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.835 "is_configured": true, 00:09:36.835 "data_offset": 2048, 00:09:36.835 "data_size": 63488 00:09:36.835 }, 00:09:36.835 { 00:09:36.835 "name": "pt3", 00:09:36.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.835 "is_configured": true, 00:09:36.835 "data_offset": 2048, 00:09:36.835 "data_size": 63488 00:09:36.835 } 
00:09:36.835 ] 00:09:36.835 } 00:09:36.835 } 00:09:36.835 }' 00:09:36.835 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:37.095 pt2 00:09:37.095 pt3' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.095 [2024-11-18 10:38:02.904114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3c2e1c8-4e6a-4b2a-ad09-935854edd99c 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b3c2e1c8-4e6a-4b2a-ad09-935854edd99c ']' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.095 [2024-11-18 10:38:02.951784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.095 [2024-11-18 10:38:02.951845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.095 [2024-11-18 10:38:02.951930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.095 [2024-11-18 10:38:02.952014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.095 [2024-11-18 10:38:02.952061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.095 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.356 10:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:37.356 10:38:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:37.356 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.357 [2024-11-18 10:38:03.087591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:37.357 [2024-11-18 10:38:03.089575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:37.357 [2024-11-18 10:38:03.089624] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:37.357 [2024-11-18 10:38:03.089668] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:37.357 [2024-11-18 10:38:03.089715] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:37.357 [2024-11-18 10:38:03.089733] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:37.357 [2024-11-18 10:38:03.089747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.357 [2024-11-18 10:38:03.089756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:37.357 request: 00:09:37.357 { 00:09:37.357 "name": "raid_bdev1", 00:09:37.357 "raid_level": "raid1", 00:09:37.357 "base_bdevs": [ 00:09:37.357 "malloc1", 00:09:37.357 "malloc2", 00:09:37.357 "malloc3" 00:09:37.357 ], 00:09:37.357 "superblock": false, 00:09:37.357 "method": "bdev_raid_create", 00:09:37.357 "req_id": 1 00:09:37.357 } 00:09:37.357 Got JSON-RPC error response 00:09:37.357 response: 00:09:37.357 { 00:09:37.357 "code": -17, 00:09:37.357 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:37.357 } 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.357 [2024-11-18 10:38:03.147457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:37.357 [2024-11-18 10:38:03.147553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.357 [2024-11-18 10:38:03.147593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:37.357 [2024-11-18 10:38:03.147648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.357 [2024-11-18 10:38:03.149994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.357 [2024-11-18 10:38:03.150059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:37.357 [2024-11-18 10:38:03.150149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:37.357 [2024-11-18 10:38:03.150234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:37.357 pt1 00:09:37.357 
10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.357 "name": "raid_bdev1", 00:09:37.357 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:37.357 "strip_size_kb": 0, 00:09:37.357 
"state": "configuring", 00:09:37.357 "raid_level": "raid1", 00:09:37.357 "superblock": true, 00:09:37.357 "num_base_bdevs": 3, 00:09:37.357 "num_base_bdevs_discovered": 1, 00:09:37.357 "num_base_bdevs_operational": 3, 00:09:37.357 "base_bdevs_list": [ 00:09:37.357 { 00:09:37.357 "name": "pt1", 00:09:37.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.357 "is_configured": true, 00:09:37.357 "data_offset": 2048, 00:09:37.357 "data_size": 63488 00:09:37.357 }, 00:09:37.357 { 00:09:37.357 "name": null, 00:09:37.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.357 "is_configured": false, 00:09:37.357 "data_offset": 2048, 00:09:37.357 "data_size": 63488 00:09:37.357 }, 00:09:37.357 { 00:09:37.357 "name": null, 00:09:37.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.357 "is_configured": false, 00:09:37.357 "data_offset": 2048, 00:09:37.357 "data_size": 63488 00:09:37.357 } 00:09:37.357 ] 00:09:37.357 }' 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.357 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.927 [2024-11-18 10:38:03.598711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.927 [2024-11-18 10:38:03.598796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.927 [2024-11-18 10:38:03.598819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:37.927 
[2024-11-18 10:38:03.598828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.927 [2024-11-18 10:38:03.599269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.927 [2024-11-18 10:38:03.599288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.927 [2024-11-18 10:38:03.599362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.927 [2024-11-18 10:38:03.599381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.927 pt2 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.927 [2024-11-18 10:38:03.610707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.927 "name": "raid_bdev1", 00:09:37.927 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:37.927 "strip_size_kb": 0, 00:09:37.927 "state": "configuring", 00:09:37.927 "raid_level": "raid1", 00:09:37.927 "superblock": true, 00:09:37.927 "num_base_bdevs": 3, 00:09:37.927 "num_base_bdevs_discovered": 1, 00:09:37.927 "num_base_bdevs_operational": 3, 00:09:37.927 "base_bdevs_list": [ 00:09:37.927 { 00:09:37.927 "name": "pt1", 00:09:37.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.927 "is_configured": true, 00:09:37.927 "data_offset": 2048, 00:09:37.927 "data_size": 63488 00:09:37.927 }, 00:09:37.927 { 00:09:37.927 "name": null, 00:09:37.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.927 "is_configured": false, 00:09:37.927 "data_offset": 0, 00:09:37.927 "data_size": 63488 00:09:37.927 }, 00:09:37.927 { 00:09:37.927 "name": null, 00:09:37.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.927 "is_configured": false, 00:09:37.927 
"data_offset": 2048, 00:09:37.927 "data_size": 63488 00:09:37.927 } 00:09:37.927 ] 00:09:37.927 }' 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.927 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.188 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:38.188 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:38.188 10:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:38.188 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.188 10:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.188 [2024-11-18 10:38:03.998058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:38.188 [2024-11-18 10:38:03.998202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.188 [2024-11-18 10:38:03.998240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:38.188 [2024-11-18 10:38:03.998287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.188 [2024-11-18 10:38:03.998846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.188 [2024-11-18 10:38:03.998909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:38.188 [2024-11-18 10:38:03.999056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:38.188 [2024-11-18 10:38:03.999130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.188 pt2 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.188 10:38:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.188 [2024-11-18 10:38:04.009999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:38.188 [2024-11-18 10:38:04.010077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.188 [2024-11-18 10:38:04.010112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:38.188 [2024-11-18 10:38:04.010148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.188 [2024-11-18 10:38:04.010580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.188 [2024-11-18 10:38:04.010638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:38.188 [2024-11-18 10:38:04.010724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:38.188 [2024-11-18 10:38:04.010772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:38.188 [2024-11-18 10:38:04.010931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.188 [2024-11-18 10:38:04.010979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.188 [2024-11-18 10:38:04.011283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:38.188 [2024-11-18 10:38:04.011451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:38.188 [2024-11-18 10:38:04.011461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:38.188 [2024-11-18 10:38:04.011604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.188 pt3 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.188 10:38:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.188 "name": "raid_bdev1", 00:09:38.188 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:38.188 "strip_size_kb": 0, 00:09:38.188 "state": "online", 00:09:38.188 "raid_level": "raid1", 00:09:38.188 "superblock": true, 00:09:38.188 "num_base_bdevs": 3, 00:09:38.188 "num_base_bdevs_discovered": 3, 00:09:38.188 "num_base_bdevs_operational": 3, 00:09:38.188 "base_bdevs_list": [ 00:09:38.188 { 00:09:38.188 "name": "pt1", 00:09:38.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.188 "is_configured": true, 00:09:38.188 "data_offset": 2048, 00:09:38.188 "data_size": 63488 00:09:38.188 }, 00:09:38.188 { 00:09:38.188 "name": "pt2", 00:09:38.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.188 "is_configured": true, 00:09:38.188 "data_offset": 2048, 00:09:38.188 "data_size": 63488 00:09:38.188 }, 00:09:38.188 { 00:09:38.188 "name": "pt3", 00:09:38.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.188 "is_configured": true, 00:09:38.188 "data_offset": 2048, 00:09:38.188 "data_size": 63488 00:09:38.188 } 00:09:38.188 ] 00:09:38.188 }' 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.188 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.758 [2024-11-18 10:38:04.473517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.758 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.758 "name": "raid_bdev1", 00:09:38.758 "aliases": [ 00:09:38.758 "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c" 00:09:38.758 ], 00:09:38.758 "product_name": "Raid Volume", 00:09:38.758 "block_size": 512, 00:09:38.758 "num_blocks": 63488, 00:09:38.758 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:38.758 "assigned_rate_limits": { 00:09:38.758 "rw_ios_per_sec": 0, 00:09:38.758 "rw_mbytes_per_sec": 0, 00:09:38.758 "r_mbytes_per_sec": 0, 00:09:38.758 "w_mbytes_per_sec": 0 00:09:38.758 }, 00:09:38.758 "claimed": false, 00:09:38.758 "zoned": false, 00:09:38.758 "supported_io_types": { 00:09:38.758 "read": true, 00:09:38.758 "write": true, 00:09:38.758 "unmap": false, 00:09:38.758 "flush": false, 00:09:38.758 "reset": true, 00:09:38.758 "nvme_admin": false, 00:09:38.758 "nvme_io": false, 00:09:38.758 "nvme_io_md": false, 00:09:38.758 "write_zeroes": true, 00:09:38.758 "zcopy": false, 00:09:38.758 "get_zone_info": 
false, 00:09:38.758 "zone_management": false, 00:09:38.758 "zone_append": false, 00:09:38.758 "compare": false, 00:09:38.758 "compare_and_write": false, 00:09:38.758 "abort": false, 00:09:38.758 "seek_hole": false, 00:09:38.758 "seek_data": false, 00:09:38.758 "copy": false, 00:09:38.758 "nvme_iov_md": false 00:09:38.758 }, 00:09:38.758 "memory_domains": [ 00:09:38.758 { 00:09:38.758 "dma_device_id": "system", 00:09:38.758 "dma_device_type": 1 00:09:38.758 }, 00:09:38.758 { 00:09:38.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.758 "dma_device_type": 2 00:09:38.758 }, 00:09:38.758 { 00:09:38.758 "dma_device_id": "system", 00:09:38.758 "dma_device_type": 1 00:09:38.758 }, 00:09:38.758 { 00:09:38.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.758 "dma_device_type": 2 00:09:38.758 }, 00:09:38.758 { 00:09:38.758 "dma_device_id": "system", 00:09:38.758 "dma_device_type": 1 00:09:38.758 }, 00:09:38.758 { 00:09:38.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.758 "dma_device_type": 2 00:09:38.759 } 00:09:38.759 ], 00:09:38.759 "driver_specific": { 00:09:38.759 "raid": { 00:09:38.759 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:38.759 "strip_size_kb": 0, 00:09:38.759 "state": "online", 00:09:38.759 "raid_level": "raid1", 00:09:38.759 "superblock": true, 00:09:38.759 "num_base_bdevs": 3, 00:09:38.759 "num_base_bdevs_discovered": 3, 00:09:38.759 "num_base_bdevs_operational": 3, 00:09:38.759 "base_bdevs_list": [ 00:09:38.759 { 00:09:38.759 "name": "pt1", 00:09:38.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.759 "is_configured": true, 00:09:38.759 "data_offset": 2048, 00:09:38.759 "data_size": 63488 00:09:38.759 }, 00:09:38.759 { 00:09:38.759 "name": "pt2", 00:09:38.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.759 "is_configured": true, 00:09:38.759 "data_offset": 2048, 00:09:38.759 "data_size": 63488 00:09:38.759 }, 00:09:38.759 { 00:09:38.759 "name": "pt3", 00:09:38.759 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:38.759 "is_configured": true, 00:09:38.759 "data_offset": 2048, 00:09:38.759 "data_size": 63488 00:09:38.759 } 00:09:38.759 ] 00:09:38.759 } 00:09:38.759 } 00:09:38.759 }' 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:38.759 pt2 00:09:38.759 pt3' 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.759 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.020 [2024-11-18 10:38:04.764900] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b3c2e1c8-4e6a-4b2a-ad09-935854edd99c '!=' b3c2e1c8-4e6a-4b2a-ad09-935854edd99c ']' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.020 [2024-11-18 10:38:04.808632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.020 10:38:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.020 "name": "raid_bdev1", 00:09:39.020 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:39.020 "strip_size_kb": 0, 00:09:39.020 "state": "online", 00:09:39.020 "raid_level": "raid1", 00:09:39.020 "superblock": true, 00:09:39.020 "num_base_bdevs": 3, 00:09:39.020 "num_base_bdevs_discovered": 2, 00:09:39.020 "num_base_bdevs_operational": 2, 00:09:39.020 "base_bdevs_list": [ 00:09:39.020 { 00:09:39.020 "name": null, 00:09:39.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.020 "is_configured": false, 00:09:39.020 "data_offset": 0, 00:09:39.020 "data_size": 63488 00:09:39.020 }, 00:09:39.020 { 00:09:39.020 "name": "pt2", 00:09:39.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.020 "is_configured": true, 00:09:39.020 "data_offset": 2048, 00:09:39.020 "data_size": 63488 00:09:39.020 }, 00:09:39.020 { 00:09:39.020 "name": "pt3", 00:09:39.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.020 "is_configured": true, 00:09:39.020 "data_offset": 2048, 00:09:39.020 "data_size": 63488 00:09:39.020 } 
00:09:39.020 ] 00:09:39.020 }' 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.020 10:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.591 [2024-11-18 10:38:05.247833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.591 [2024-11-18 10:38:05.247897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.591 [2024-11-18 10:38:05.247984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.591 [2024-11-18 10:38:05.248054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.591 [2024-11-18 10:38:05.248113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.591 10:38:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.591 [2024-11-18 10:38:05.335663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.591 [2024-11-18 10:38:05.335714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.591 [2024-11-18 10:38:05.335731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:39.591 [2024-11-18 10:38:05.335743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.591 [2024-11-18 10:38:05.338225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.591 [2024-11-18 10:38:05.338313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.591 [2024-11-18 10:38:05.338395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:39.591 [2024-11-18 10:38:05.338452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.591 pt2 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.591 10:38:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.591 "name": "raid_bdev1", 00:09:39.591 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:39.591 "strip_size_kb": 0, 00:09:39.591 "state": "configuring", 00:09:39.591 "raid_level": "raid1", 00:09:39.591 "superblock": true, 00:09:39.591 "num_base_bdevs": 3, 00:09:39.591 "num_base_bdevs_discovered": 1, 00:09:39.591 "num_base_bdevs_operational": 2, 00:09:39.591 "base_bdevs_list": [ 00:09:39.591 { 00:09:39.591 "name": null, 00:09:39.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.591 "is_configured": false, 00:09:39.591 "data_offset": 2048, 00:09:39.591 "data_size": 63488 00:09:39.591 }, 00:09:39.591 { 00:09:39.591 "name": "pt2", 00:09:39.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.591 "is_configured": true, 00:09:39.591 "data_offset": 2048, 00:09:39.591 "data_size": 63488 00:09:39.591 }, 00:09:39.591 { 00:09:39.591 "name": null, 00:09:39.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.591 "is_configured": false, 00:09:39.591 "data_offset": 2048, 00:09:39.591 "data_size": 63488 00:09:39.591 } 
00:09:39.591 ] 00:09:39.591 }' 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.591 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.161 [2024-11-18 10:38:05.762999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:40.161 [2024-11-18 10:38:05.763086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.161 [2024-11-18 10:38:05.763120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:40.161 [2024-11-18 10:38:05.763149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.161 [2024-11-18 10:38:05.763604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.161 [2024-11-18 10:38:05.763626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:40.161 [2024-11-18 10:38:05.763703] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:40.161 [2024-11-18 10:38:05.763732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:40.161 [2024-11-18 10:38:05.763855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:40.161 [2024-11-18 10:38:05.763867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.161 [2024-11-18 10:38:05.764134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:40.161 [2024-11-18 10:38:05.764319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.161 [2024-11-18 10:38:05.764334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:40.161 [2024-11-18 10:38:05.764482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.161 pt3 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.161 
10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.161 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.161 "name": "raid_bdev1", 00:09:40.161 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:40.161 "strip_size_kb": 0, 00:09:40.161 "state": "online", 00:09:40.161 "raid_level": "raid1", 00:09:40.161 "superblock": true, 00:09:40.161 "num_base_bdevs": 3, 00:09:40.161 "num_base_bdevs_discovered": 2, 00:09:40.161 "num_base_bdevs_operational": 2, 00:09:40.161 "base_bdevs_list": [ 00:09:40.161 { 00:09:40.161 "name": null, 00:09:40.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.161 "is_configured": false, 00:09:40.161 "data_offset": 2048, 00:09:40.161 "data_size": 63488 00:09:40.161 }, 00:09:40.161 { 00:09:40.161 "name": "pt2", 00:09:40.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.161 "is_configured": true, 00:09:40.161 "data_offset": 2048, 00:09:40.161 "data_size": 63488 00:09:40.161 }, 00:09:40.161 { 00:09:40.161 "name": "pt3", 00:09:40.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.161 "is_configured": true, 00:09:40.161 "data_offset": 2048, 00:09:40.161 "data_size": 63488 00:09:40.161 } 00:09:40.161 ] 00:09:40.162 }' 00:09:40.162 10:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.162 10:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.421 [2024-11-18 10:38:06.194242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.421 [2024-11-18 10:38:06.194309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.421 [2024-11-18 10:38:06.194394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.421 [2024-11-18 10:38:06.194464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.421 [2024-11-18 10:38:06.194497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:40.421 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.422 [2024-11-18 10:38:06.270124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:40.422 [2024-11-18 10:38:06.270225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.422 [2024-11-18 10:38:06.270281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:40.422 [2024-11-18 10:38:06.270317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.422 [2024-11-18 10:38:06.272772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.422 [2024-11-18 10:38:06.272845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:40.422 [2024-11-18 10:38:06.272948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:40.422 [2024-11-18 10:38:06.273009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:40.422 [2024-11-18 10:38:06.273167] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:40.422 [2024-11-18 10:38:06.273235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.422 [2024-11-18 10:38:06.273273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:40.422 [2024-11-18 10:38:06.273373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.422 pt1 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.422 10:38:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.681 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.681 "name": "raid_bdev1", 00:09:40.681 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:40.681 "strip_size_kb": 0, 00:09:40.681 "state": "configuring", 00:09:40.681 "raid_level": "raid1", 00:09:40.681 "superblock": true, 00:09:40.681 "num_base_bdevs": 3, 00:09:40.681 "num_base_bdevs_discovered": 1, 00:09:40.681 "num_base_bdevs_operational": 2, 00:09:40.682 "base_bdevs_list": [ 00:09:40.682 { 00:09:40.682 "name": null, 00:09:40.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.682 "is_configured": false, 00:09:40.682 "data_offset": 2048, 00:09:40.682 "data_size": 63488 00:09:40.682 }, 00:09:40.682 { 00:09:40.682 "name": "pt2", 00:09:40.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.682 "is_configured": true, 00:09:40.682 "data_offset": 2048, 00:09:40.682 "data_size": 63488 00:09:40.682 }, 00:09:40.682 { 00:09:40.682 "name": null, 00:09:40.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.682 "is_configured": false, 00:09:40.682 "data_offset": 2048, 00:09:40.682 "data_size": 63488 00:09:40.682 } 00:09:40.682 ] 00:09:40.682 }' 00:09:40.682 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.682 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.941 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.941 [2024-11-18 10:38:06.721333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:40.941 [2024-11-18 10:38:06.721387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.941 [2024-11-18 10:38:06.721408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:40.941 [2024-11-18 10:38:06.721417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.941 [2024-11-18 10:38:06.721874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.941 [2024-11-18 10:38:06.721889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:40.941 [2024-11-18 10:38:06.721964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:40.941 [2024-11-18 10:38:06.722007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:40.941 [2024-11-18 10:38:06.722127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:40.941 [2024-11-18 10:38:06.722135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.941 [2024-11-18 10:38:06.722411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:40.941 [2024-11-18 10:38:06.722577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:40.941 [2024-11-18 10:38:06.722589] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:40.942 [2024-11-18 10:38:06.722739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.942 pt3 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.942 "name": "raid_bdev1", 00:09:40.942 "uuid": "b3c2e1c8-4e6a-4b2a-ad09-935854edd99c", 00:09:40.942 "strip_size_kb": 0, 00:09:40.942 "state": "online", 00:09:40.942 "raid_level": "raid1", 00:09:40.942 "superblock": true, 00:09:40.942 "num_base_bdevs": 3, 00:09:40.942 "num_base_bdevs_discovered": 2, 00:09:40.942 "num_base_bdevs_operational": 2, 00:09:40.942 "base_bdevs_list": [ 00:09:40.942 { 00:09:40.942 "name": null, 00:09:40.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.942 "is_configured": false, 00:09:40.942 "data_offset": 2048, 00:09:40.942 "data_size": 63488 00:09:40.942 }, 00:09:40.942 { 00:09:40.942 "name": "pt2", 00:09:40.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.942 "is_configured": true, 00:09:40.942 "data_offset": 2048, 00:09:40.942 "data_size": 63488 00:09:40.942 }, 00:09:40.942 { 00:09:40.942 "name": "pt3", 00:09:40.942 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.942 "is_configured": true, 00:09:40.942 "data_offset": 2048, 00:09:40.942 "data_size": 63488 00:09:40.942 } 00:09:40.942 ] 00:09:40.942 }' 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.942 10:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.511 [2024-11-18 10:38:07.172813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b3c2e1c8-4e6a-4b2a-ad09-935854edd99c '!=' b3c2e1c8-4e6a-4b2a-ad09-935854edd99c ']' 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68514 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68514 ']' 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68514 00:09:41.511 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:41.512 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.512 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68514 00:09:41.512 killing process with pid 68514 00:09:41.512 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.512 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.512 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68514' 00:09:41.512 10:38:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68514 00:09:41.512 [2024-11-18 10:38:07.225175] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.512 [2024-11-18 10:38:07.225275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.512 [2024-11-18 10:38:07.225328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.512 [2024-11-18 10:38:07.225339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:41.512 10:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68514 00:09:41.772 [2024-11-18 10:38:07.540322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.154 10:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:43.154 00:09:43.154 real 0m7.612s 00:09:43.154 user 0m11.709s 00:09:43.154 sys 0m1.462s 00:09:43.154 10:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.154 10:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 ************************************ 00:09:43.154 END TEST raid_superblock_test 00:09:43.154 ************************************ 00:09:43.154 10:38:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:43.154 10:38:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:43.154 10:38:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.154 10:38:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 ************************************ 00:09:43.154 START TEST raid_read_error_test 00:09:43.154 ************************************ 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:43.154 10:38:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:43.154 10:38:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ODqwFXTTBI 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68960 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68960 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68960 ']' 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.154 10:38:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.154 [2024-11-18 10:38:08.879895] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:43.154 [2024-11-18 10:38:08.880086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68960 ] 00:09:43.414 [2024-11-18 10:38:09.051633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.415 [2024-11-18 10:38:09.180675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.673 [2024-11-18 10:38:09.409283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.673 [2024-11-18 10:38:09.409426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.933 BaseBdev1_malloc 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.933 true 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.933 [2024-11-18 10:38:09.758841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.933 [2024-11-18 10:38:09.758901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.933 [2024-11-18 10:38:09.758937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.933 [2024-11-18 10:38:09.758949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.933 [2024-11-18 10:38:09.761267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.933 [2024-11-18 10:38:09.761355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.933 BaseBdev1 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.933 BaseBdev2_malloc 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.933 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 true 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 [2024-11-18 10:38:09.831450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:44.193 [2024-11-18 10:38:09.831510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.193 [2024-11-18 10:38:09.831543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:44.193 [2024-11-18 10:38:09.831555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.193 [2024-11-18 10:38:09.833886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.193 [2024-11-18 10:38:09.833923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:44.193 BaseBdev2 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 BaseBdev3_malloc 00:09:44.193 10:38:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 true 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 [2024-11-18 10:38:09.938479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:44.193 [2024-11-18 10:38:09.938589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.193 [2024-11-18 10:38:09.938624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:44.193 [2024-11-18 10:38:09.938636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.193 [2024-11-18 10:38:09.940978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.193 [2024-11-18 10:38:09.941018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:44.193 BaseBdev3 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 [2024-11-18 10:38:09.950532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.193 [2024-11-18 10:38:09.952620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.193 [2024-11-18 10:38:09.952689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.193 [2024-11-18 10:38:09.952883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:44.193 [2024-11-18 10:38:09.952895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.193 [2024-11-18 10:38:09.953128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:44.193 [2024-11-18 10:38:09.953313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:44.193 [2024-11-18 10:38:09.953326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:44.193 [2024-11-18 10:38:09.953466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.193 10:38:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.193 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.194 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.194 10:38:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.194 10:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.194 "name": "raid_bdev1", 00:09:44.194 "uuid": "7370c71b-a621-4ab0-a9fd-5cf546c11543", 00:09:44.194 "strip_size_kb": 0, 00:09:44.194 "state": "online", 00:09:44.194 "raid_level": "raid1", 00:09:44.194 "superblock": true, 00:09:44.194 "num_base_bdevs": 3, 00:09:44.194 "num_base_bdevs_discovered": 3, 00:09:44.194 "num_base_bdevs_operational": 3, 00:09:44.194 "base_bdevs_list": [ 00:09:44.194 { 00:09:44.194 "name": "BaseBdev1", 00:09:44.194 "uuid": "990f6481-8214-5e7f-b130-7bd8fedf999e", 00:09:44.194 "is_configured": true, 00:09:44.194 "data_offset": 2048, 00:09:44.194 "data_size": 63488 00:09:44.194 }, 00:09:44.194 { 00:09:44.194 "name": "BaseBdev2", 00:09:44.194 "uuid": "9ecab6b8-dcfb-5662-a655-4885eb3ccbb8", 00:09:44.194 "is_configured": true, 00:09:44.194 "data_offset": 2048, 00:09:44.194 "data_size": 63488 
00:09:44.194 }, 00:09:44.194 { 00:09:44.194 "name": "BaseBdev3", 00:09:44.194 "uuid": "13f5a14a-4008-5db6-9817-65d8c0241108", 00:09:44.194 "is_configured": true, 00:09:44.194 "data_offset": 2048, 00:09:44.194 "data_size": 63488 00:09:44.194 } 00:09:44.194 ] 00:09:44.194 }' 00:09:44.194 10:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.194 10:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.763 10:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.763 10:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:44.763 [2024-11-18 10:38:10.446848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.704 
10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.704 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.704 "name": "raid_bdev1", 00:09:45.704 "uuid": "7370c71b-a621-4ab0-a9fd-5cf546c11543", 00:09:45.704 "strip_size_kb": 0, 00:09:45.704 "state": "online", 00:09:45.704 "raid_level": "raid1", 00:09:45.704 "superblock": true, 00:09:45.704 "num_base_bdevs": 3, 00:09:45.704 "num_base_bdevs_discovered": 3, 00:09:45.704 "num_base_bdevs_operational": 3, 00:09:45.704 "base_bdevs_list": [ 00:09:45.704 { 00:09:45.704 "name": "BaseBdev1", 00:09:45.704 "uuid": "990f6481-8214-5e7f-b130-7bd8fedf999e", 
00:09:45.704 "is_configured": true, 00:09:45.704 "data_offset": 2048, 00:09:45.704 "data_size": 63488 00:09:45.704 }, 00:09:45.704 { 00:09:45.705 "name": "BaseBdev2", 00:09:45.705 "uuid": "9ecab6b8-dcfb-5662-a655-4885eb3ccbb8", 00:09:45.705 "is_configured": true, 00:09:45.705 "data_offset": 2048, 00:09:45.705 "data_size": 63488 00:09:45.705 }, 00:09:45.705 { 00:09:45.705 "name": "BaseBdev3", 00:09:45.705 "uuid": "13f5a14a-4008-5db6-9817-65d8c0241108", 00:09:45.705 "is_configured": true, 00:09:45.705 "data_offset": 2048, 00:09:45.705 "data_size": 63488 00:09:45.705 } 00:09:45.705 ] 00:09:45.705 }' 00:09:45.705 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.705 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.965 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.965 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.965 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.965 [2024-11-18 10:38:11.844202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.965 [2024-11-18 10:38:11.844243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.965 [2024-11-18 10:38:11.846969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.965 [2024-11-18 10:38:11.847026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.965 [2024-11-18 10:38:11.847135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.965 [2024-11-18 10:38:11.847146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:46.225 { 00:09:46.225 "results": [ 00:09:46.225 { 00:09:46.225 "job": "raid_bdev1", 
00:09:46.225 "core_mask": "0x1", 00:09:46.225 "workload": "randrw", 00:09:46.225 "percentage": 50, 00:09:46.225 "status": "finished", 00:09:46.225 "queue_depth": 1, 00:09:46.225 "io_size": 131072, 00:09:46.225 "runtime": 1.398181, 00:09:46.225 "iops": 10816.196186330668, 00:09:46.225 "mibps": 1352.0245232913335, 00:09:46.225 "io_failed": 0, 00:09:46.225 "io_timeout": 0, 00:09:46.225 "avg_latency_us": 90.07050205779854, 00:09:46.225 "min_latency_us": 22.022707423580787, 00:09:46.225 "max_latency_us": 1552.5449781659388 00:09:46.225 } 00:09:46.225 ], 00:09:46.225 "core_count": 1 00:09:46.225 } 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68960 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68960 ']' 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68960 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68960 00:09:46.225 killing process with pid 68960 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.225 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.226 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68960' 00:09:46.226 10:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68960 00:09:46.226 [2024-11-18 10:38:11.889847] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.226 10:38:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68960 00:09:46.485 [2024-11-18 10:38:12.134979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ODqwFXTTBI 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:47.867 00:09:47.867 real 0m4.583s 00:09:47.867 user 0m5.290s 00:09:47.867 sys 0m0.645s 00:09:47.867 ************************************ 00:09:47.867 END TEST raid_read_error_test 00:09:47.867 ************************************ 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.867 10:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.867 10:38:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:47.867 10:38:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:47.867 10:38:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.867 10:38:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.867 ************************************ 00:09:47.867 START TEST raid_write_error_test 00:09:47.867 ************************************ 00:09:47.867 10:38:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.867 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qG1X5LRulC 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69100 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69100 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69100 ']' 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.868 10:38:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.868 [2024-11-18 10:38:13.536610] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:47.868 [2024-11-18 10:38:13.536830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69100 ] 00:09:47.868 [2024-11-18 10:38:13.712769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.128 [2024-11-18 10:38:13.844056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.386 [2024-11-18 10:38:14.075252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.386 [2024-11-18 10:38:14.075294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.646 BaseBdev1_malloc 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.646 true 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.646 [2024-11-18 10:38:14.418398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.646 [2024-11-18 10:38:14.418465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.646 [2024-11-18 10:38:14.418502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:48.646 [2024-11-18 10:38:14.418513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.646 [2024-11-18 10:38:14.420839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.646 [2024-11-18 10:38:14.420921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.646 BaseBdev1 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.646 BaseBdev2_malloc 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.646 true 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.646 [2024-11-18 10:38:14.492009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:48.646 [2024-11-18 10:38:14.492068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.646 [2024-11-18 10:38:14.492084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:48.646 [2024-11-18 10:38:14.492096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.646 [2024-11-18 10:38:14.494397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.646 [2024-11-18 10:38:14.494487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:48.646 BaseBdev2 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.646 10:38:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.646 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.906 BaseBdev3_malloc 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.906 true 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.906 [2024-11-18 10:38:14.592125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:48.906 [2024-11-18 10:38:14.592196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.906 [2024-11-18 10:38:14.592214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:48.906 [2024-11-18 10:38:14.592226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.906 [2024-11-18 10:38:14.594554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.906 [2024-11-18 10:38:14.594658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:48.906 BaseBdev3 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.906 [2024-11-18 10:38:14.604188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.906 [2024-11-18 10:38:14.606219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.906 [2024-11-18 10:38:14.606292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.906 [2024-11-18 10:38:14.606493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.906 [2024-11-18 10:38:14.606505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.906 [2024-11-18 10:38:14.606739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:48.906 [2024-11-18 10:38:14.606911] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.906 [2024-11-18 10:38:14.606924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.906 [2024-11-18 10:38:14.607101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.906 "name": "raid_bdev1", 00:09:48.906 "uuid": "12d942f3-b1be-43d1-9f74-a43ad9671daf", 00:09:48.906 "strip_size_kb": 0, 00:09:48.906 "state": "online", 00:09:48.906 "raid_level": "raid1", 00:09:48.906 "superblock": true, 00:09:48.906 "num_base_bdevs": 3, 00:09:48.906 "num_base_bdevs_discovered": 3, 00:09:48.906 "num_base_bdevs_operational": 3, 00:09:48.906 "base_bdevs_list": [ 00:09:48.906 { 00:09:48.906 "name": "BaseBdev1", 00:09:48.906 
"uuid": "66705390-8299-551a-afc1-1c207ebab731", 00:09:48.906 "is_configured": true, 00:09:48.906 "data_offset": 2048, 00:09:48.906 "data_size": 63488 00:09:48.906 }, 00:09:48.906 { 00:09:48.906 "name": "BaseBdev2", 00:09:48.906 "uuid": "5a4ca8e7-77cd-5ea3-a0d9-8be1f65e87bb", 00:09:48.906 "is_configured": true, 00:09:48.906 "data_offset": 2048, 00:09:48.906 "data_size": 63488 00:09:48.906 }, 00:09:48.906 { 00:09:48.906 "name": "BaseBdev3", 00:09:48.906 "uuid": "cbe4ec76-c210-52c2-b242-96008692622b", 00:09:48.906 "is_configured": true, 00:09:48.906 "data_offset": 2048, 00:09:48.906 "data_size": 63488 00:09:48.906 } 00:09:48.906 ] 00:09:48.906 }' 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.906 10:38:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.475 10:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.475 10:38:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.475 [2024-11-18 10:38:15.172549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:50.415 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:50.415 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.416 [2024-11-18 10:38:16.091510] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:50.416 [2024-11-18 10:38:16.091670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.416 [2024-11-18 10:38:16.091936] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.416 "name": "raid_bdev1", 00:09:50.416 "uuid": "12d942f3-b1be-43d1-9f74-a43ad9671daf", 00:09:50.416 "strip_size_kb": 0, 00:09:50.416 "state": "online", 00:09:50.416 "raid_level": "raid1", 00:09:50.416 "superblock": true, 00:09:50.416 "num_base_bdevs": 3, 00:09:50.416 "num_base_bdevs_discovered": 2, 00:09:50.416 "num_base_bdevs_operational": 2, 00:09:50.416 "base_bdevs_list": [ 00:09:50.416 { 00:09:50.416 "name": null, 00:09:50.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.416 "is_configured": false, 00:09:50.416 "data_offset": 0, 00:09:50.416 "data_size": 63488 00:09:50.416 }, 00:09:50.416 { 00:09:50.416 "name": "BaseBdev2", 00:09:50.416 "uuid": "5a4ca8e7-77cd-5ea3-a0d9-8be1f65e87bb", 00:09:50.416 "is_configured": true, 00:09:50.416 "data_offset": 2048, 00:09:50.416 "data_size": 63488 00:09:50.416 }, 00:09:50.416 { 00:09:50.416 "name": "BaseBdev3", 00:09:50.416 "uuid": "cbe4ec76-c210-52c2-b242-96008692622b", 00:09:50.416 "is_configured": true, 00:09:50.416 "data_offset": 2048, 00:09:50.416 "data_size": 63488 00:09:50.416 } 00:09:50.416 ] 00:09:50.416 }' 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.416 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.675 [2024-11-18 10:38:16.534929] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.675 [2024-11-18 10:38:16.535075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.675 [2024-11-18 10:38:16.537709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.675 [2024-11-18 10:38:16.537826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.675 [2024-11-18 10:38:16.537932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.675 [2024-11-18 10:38:16.537988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:50.675 { 00:09:50.675 "results": [ 00:09:50.675 { 00:09:50.675 "job": "raid_bdev1", 00:09:50.675 "core_mask": "0x1", 00:09:50.675 "workload": "randrw", 00:09:50.675 "percentage": 50, 00:09:50.675 "status": "finished", 00:09:50.675 "queue_depth": 1, 00:09:50.675 "io_size": 131072, 00:09:50.675 "runtime": 1.363053, 00:09:50.675 "iops": 12123.519775093118, 00:09:50.675 "mibps": 1515.4399718866398, 00:09:50.675 "io_failed": 0, 00:09:50.675 "io_timeout": 0, 00:09:50.675 "avg_latency_us": 80.06101878191704, 00:09:50.675 "min_latency_us": 21.799126637554586, 00:09:50.675 "max_latency_us": 1366.5257641921398 00:09:50.675 } 00:09:50.675 ], 00:09:50.675 "core_count": 1 00:09:50.675 } 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69100 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69100 ']' 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69100 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:50.675 10:38:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.675 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69100 00:09:50.935 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.935 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.935 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69100' 00:09:50.935 killing process with pid 69100 00:09:50.935 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69100 00:09:50.935 [2024-11-18 10:38:16.582714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.935 10:38:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69100 00:09:51.195 [2024-11-18 10:38:16.825732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.579 10:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qG1X5LRulC 00:09:52.579 10:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.579 10:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.580 10:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:52.580 10:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:52.580 ************************************ 00:09:52.580 END TEST raid_write_error_test 00:09:52.580 ************************************ 00:09:52.580 10:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.580 10:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:52.580 10:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:52.580 00:09:52.580 real 0m4.622s 00:09:52.580 user 0m5.362s 00:09:52.580 sys 0m0.663s 00:09:52.580 10:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.580 10:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.580 10:38:18 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:52.580 10:38:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:52.580 10:38:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:52.580 10:38:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.580 10:38:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.580 10:38:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.580 ************************************ 00:09:52.580 START TEST raid_state_function_test 00:09:52.580 ************************************ 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.580 
10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:52.580 10:38:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:52.580 Process raid pid: 69244 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69244 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69244' 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69244 00:09:52.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69244 ']' 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.580 10:38:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.580 [2024-11-18 10:38:18.228439] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:52.580 [2024-11-18 10:38:18.229038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.580 [2024-11-18 10:38:18.408847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.840 [2024-11-18 10:38:18.544832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.099 [2024-11-18 10:38:18.774056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.099 [2024-11-18 10:38:18.774097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.359 [2024-11-18 10:38:19.047509] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.359 [2024-11-18 10:38:19.047565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.359 [2024-11-18 10:38:19.047576] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.359 [2024-11-18 10:38:19.047586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.359 [2024-11-18 10:38:19.047592] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:53.359 [2024-11-18 10:38:19.047601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.359 [2024-11-18 10:38:19.047607] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.359 [2024-11-18 10:38:19.047616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.359 "name": "Existed_Raid", 00:09:53.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.359 "strip_size_kb": 64, 00:09:53.359 "state": "configuring", 00:09:53.359 "raid_level": "raid0", 00:09:53.359 "superblock": false, 00:09:53.359 "num_base_bdevs": 4, 00:09:53.359 "num_base_bdevs_discovered": 0, 00:09:53.359 "num_base_bdevs_operational": 4, 00:09:53.359 "base_bdevs_list": [ 00:09:53.359 { 00:09:53.359 "name": "BaseBdev1", 00:09:53.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.359 "is_configured": false, 00:09:53.359 "data_offset": 0, 00:09:53.359 "data_size": 0 00:09:53.359 }, 00:09:53.359 { 00:09:53.359 "name": "BaseBdev2", 00:09:53.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.359 "is_configured": false, 00:09:53.359 "data_offset": 0, 00:09:53.359 "data_size": 0 00:09:53.359 }, 00:09:53.359 { 00:09:53.359 "name": "BaseBdev3", 00:09:53.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.359 "is_configured": false, 00:09:53.359 "data_offset": 0, 00:09:53.359 "data_size": 0 00:09:53.359 }, 00:09:53.359 { 00:09:53.359 "name": "BaseBdev4", 00:09:53.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.359 "is_configured": false, 00:09:53.359 "data_offset": 0, 00:09:53.359 "data_size": 0 00:09:53.359 } 00:09:53.359 ] 00:09:53.359 }' 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.359 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.619 [2024-11-18 10:38:19.454847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.619 [2024-11-18 10:38:19.454983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.619 [2024-11-18 10:38:19.462801] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.619 [2024-11-18 10:38:19.462889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.619 [2024-11-18 10:38:19.462918] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.619 [2024-11-18 10:38:19.462941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.619 [2024-11-18 10:38:19.462965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.619 [2024-11-18 10:38:19.463003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.619 [2024-11-18 10:38:19.463021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.619 [2024-11-18 10:38:19.463042] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.619 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.879 [2024-11-18 10:38:19.511841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.879 BaseBdev1 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.879 [ 00:09:53.879 { 00:09:53.879 "name": "BaseBdev1", 00:09:53.879 "aliases": [ 00:09:53.879 "7425183f-4d18-499d-9557-c1927d472180" 00:09:53.879 ], 00:09:53.879 "product_name": "Malloc disk", 00:09:53.879 "block_size": 512, 00:09:53.879 "num_blocks": 65536, 00:09:53.879 "uuid": "7425183f-4d18-499d-9557-c1927d472180", 00:09:53.879 "assigned_rate_limits": { 00:09:53.879 "rw_ios_per_sec": 0, 00:09:53.879 "rw_mbytes_per_sec": 0, 00:09:53.879 "r_mbytes_per_sec": 0, 00:09:53.879 "w_mbytes_per_sec": 0 00:09:53.879 }, 00:09:53.879 "claimed": true, 00:09:53.879 "claim_type": "exclusive_write", 00:09:53.879 "zoned": false, 00:09:53.879 "supported_io_types": { 00:09:53.879 "read": true, 00:09:53.879 "write": true, 00:09:53.879 "unmap": true, 00:09:53.879 "flush": true, 00:09:53.879 "reset": true, 00:09:53.879 "nvme_admin": false, 00:09:53.879 "nvme_io": false, 00:09:53.879 "nvme_io_md": false, 00:09:53.879 "write_zeroes": true, 00:09:53.879 "zcopy": true, 00:09:53.879 "get_zone_info": false, 00:09:53.879 "zone_management": false, 00:09:53.879 "zone_append": false, 00:09:53.879 "compare": false, 00:09:53.879 "compare_and_write": false, 00:09:53.879 "abort": true, 00:09:53.879 "seek_hole": false, 00:09:53.879 "seek_data": false, 00:09:53.879 "copy": true, 00:09:53.879 "nvme_iov_md": false 00:09:53.879 }, 00:09:53.879 "memory_domains": [ 00:09:53.879 { 00:09:53.879 "dma_device_id": "system", 00:09:53.879 "dma_device_type": 1 00:09:53.879 }, 00:09:53.879 { 00:09:53.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.879 "dma_device_type": 2 00:09:53.879 } 00:09:53.879 ], 00:09:53.879 "driver_specific": {} 00:09:53.879 } 00:09:53.879 ] 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.879 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.880 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.880 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.880 "name": "Existed_Raid", 
00:09:53.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.880 "strip_size_kb": 64, 00:09:53.880 "state": "configuring", 00:09:53.880 "raid_level": "raid0", 00:09:53.880 "superblock": false, 00:09:53.880 "num_base_bdevs": 4, 00:09:53.880 "num_base_bdevs_discovered": 1, 00:09:53.880 "num_base_bdevs_operational": 4, 00:09:53.880 "base_bdevs_list": [ 00:09:53.880 { 00:09:53.880 "name": "BaseBdev1", 00:09:53.880 "uuid": "7425183f-4d18-499d-9557-c1927d472180", 00:09:53.880 "is_configured": true, 00:09:53.880 "data_offset": 0, 00:09:53.880 "data_size": 65536 00:09:53.880 }, 00:09:53.880 { 00:09:53.880 "name": "BaseBdev2", 00:09:53.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.880 "is_configured": false, 00:09:53.880 "data_offset": 0, 00:09:53.880 "data_size": 0 00:09:53.880 }, 00:09:53.880 { 00:09:53.880 "name": "BaseBdev3", 00:09:53.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.880 "is_configured": false, 00:09:53.880 "data_offset": 0, 00:09:53.880 "data_size": 0 00:09:53.880 }, 00:09:53.880 { 00:09:53.880 "name": "BaseBdev4", 00:09:53.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.880 "is_configured": false, 00:09:53.880 "data_offset": 0, 00:09:53.880 "data_size": 0 00:09:53.880 } 00:09:53.880 ] 00:09:53.880 }' 00:09:53.880 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.880 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.140 [2024-11-18 10:38:19.975050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.140 [2024-11-18 10:38:19.975145] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.140 [2024-11-18 10:38:19.987120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.140 [2024-11-18 10:38:19.989194] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.140 [2024-11-18 10:38:19.989267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.140 [2024-11-18 10:38:19.989295] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.140 [2024-11-18 10:38:19.989319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.140 [2024-11-18 10:38:19.989338] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:54.140 [2024-11-18 10:38:19.989358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.140 10:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.140 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.140 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.140 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.400 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.400 "name": "Existed_Raid", 00:09:54.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.400 "strip_size_kb": 64, 00:09:54.400 "state": "configuring", 00:09:54.400 "raid_level": "raid0", 00:09:54.400 "superblock": false, 00:09:54.400 "num_base_bdevs": 4, 00:09:54.400 
"num_base_bdevs_discovered": 1, 00:09:54.400 "num_base_bdevs_operational": 4, 00:09:54.400 "base_bdevs_list": [ 00:09:54.400 { 00:09:54.400 "name": "BaseBdev1", 00:09:54.400 "uuid": "7425183f-4d18-499d-9557-c1927d472180", 00:09:54.400 "is_configured": true, 00:09:54.400 "data_offset": 0, 00:09:54.400 "data_size": 65536 00:09:54.400 }, 00:09:54.400 { 00:09:54.400 "name": "BaseBdev2", 00:09:54.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.400 "is_configured": false, 00:09:54.400 "data_offset": 0, 00:09:54.400 "data_size": 0 00:09:54.400 }, 00:09:54.400 { 00:09:54.400 "name": "BaseBdev3", 00:09:54.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.400 "is_configured": false, 00:09:54.400 "data_offset": 0, 00:09:54.400 "data_size": 0 00:09:54.400 }, 00:09:54.400 { 00:09:54.400 "name": "BaseBdev4", 00:09:54.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.400 "is_configured": false, 00:09:54.400 "data_offset": 0, 00:09:54.400 "data_size": 0 00:09:54.400 } 00:09:54.400 ] 00:09:54.400 }' 00:09:54.400 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.400 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.660 [2024-11-18 10:38:20.496705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.660 BaseBdev2 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:54.660 10:38:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.660 [ 00:09:54.660 { 00:09:54.660 "name": "BaseBdev2", 00:09:54.660 "aliases": [ 00:09:54.660 "ffa76bb4-21eb-4ab7-b5ae-c6e1e9b51735" 00:09:54.660 ], 00:09:54.660 "product_name": "Malloc disk", 00:09:54.660 "block_size": 512, 00:09:54.660 "num_blocks": 65536, 00:09:54.660 "uuid": "ffa76bb4-21eb-4ab7-b5ae-c6e1e9b51735", 00:09:54.660 "assigned_rate_limits": { 00:09:54.660 "rw_ios_per_sec": 0, 00:09:54.660 "rw_mbytes_per_sec": 0, 00:09:54.660 "r_mbytes_per_sec": 0, 00:09:54.660 "w_mbytes_per_sec": 0 00:09:54.660 }, 00:09:54.660 "claimed": true, 00:09:54.660 "claim_type": "exclusive_write", 00:09:54.660 "zoned": false, 00:09:54.660 "supported_io_types": { 
00:09:54.660 "read": true, 00:09:54.660 "write": true, 00:09:54.660 "unmap": true, 00:09:54.660 "flush": true, 00:09:54.660 "reset": true, 00:09:54.660 "nvme_admin": false, 00:09:54.660 "nvme_io": false, 00:09:54.660 "nvme_io_md": false, 00:09:54.660 "write_zeroes": true, 00:09:54.660 "zcopy": true, 00:09:54.660 "get_zone_info": false, 00:09:54.660 "zone_management": false, 00:09:54.660 "zone_append": false, 00:09:54.660 "compare": false, 00:09:54.660 "compare_and_write": false, 00:09:54.660 "abort": true, 00:09:54.660 "seek_hole": false, 00:09:54.660 "seek_data": false, 00:09:54.660 "copy": true, 00:09:54.660 "nvme_iov_md": false 00:09:54.660 }, 00:09:54.660 "memory_domains": [ 00:09:54.660 { 00:09:54.660 "dma_device_id": "system", 00:09:54.660 "dma_device_type": 1 00:09:54.660 }, 00:09:54.660 { 00:09:54.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.660 "dma_device_type": 2 00:09:54.660 } 00:09:54.660 ], 00:09:54.660 "driver_specific": {} 00:09:54.660 } 00:09:54.660 ] 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:54.660 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.661 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.661 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.661 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.661 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.661 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.920 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.920 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.920 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.920 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.920 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.920 "name": "Existed_Raid", 00:09:54.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.920 "strip_size_kb": 64, 00:09:54.920 "state": "configuring", 00:09:54.920 "raid_level": "raid0", 00:09:54.920 "superblock": false, 00:09:54.920 "num_base_bdevs": 4, 00:09:54.920 "num_base_bdevs_discovered": 2, 00:09:54.920 "num_base_bdevs_operational": 4, 00:09:54.920 "base_bdevs_list": [ 00:09:54.920 { 00:09:54.921 "name": "BaseBdev1", 00:09:54.921 "uuid": "7425183f-4d18-499d-9557-c1927d472180", 00:09:54.921 "is_configured": true, 00:09:54.921 "data_offset": 0, 00:09:54.921 "data_size": 65536 00:09:54.921 }, 00:09:54.921 { 00:09:54.921 "name": "BaseBdev2", 00:09:54.921 "uuid": "ffa76bb4-21eb-4ab7-b5ae-c6e1e9b51735", 00:09:54.921 
"is_configured": true, 00:09:54.921 "data_offset": 0, 00:09:54.921 "data_size": 65536 00:09:54.921 }, 00:09:54.921 { 00:09:54.921 "name": "BaseBdev3", 00:09:54.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.921 "is_configured": false, 00:09:54.921 "data_offset": 0, 00:09:54.921 "data_size": 0 00:09:54.921 }, 00:09:54.921 { 00:09:54.921 "name": "BaseBdev4", 00:09:54.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.921 "is_configured": false, 00:09:54.921 "data_offset": 0, 00:09:54.921 "data_size": 0 00:09:54.921 } 00:09:54.921 ] 00:09:54.921 }' 00:09:54.921 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.921 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.180 10:38:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:55.181 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.181 10:38:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.181 [2024-11-18 10:38:21.054293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.181 BaseBdev3 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.181 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.441 [ 00:09:55.441 { 00:09:55.441 "name": "BaseBdev3", 00:09:55.441 "aliases": [ 00:09:55.441 "2142bc11-a8c8-4443-834b-36f65a2c1585" 00:09:55.441 ], 00:09:55.441 "product_name": "Malloc disk", 00:09:55.441 "block_size": 512, 00:09:55.441 "num_blocks": 65536, 00:09:55.441 "uuid": "2142bc11-a8c8-4443-834b-36f65a2c1585", 00:09:55.441 "assigned_rate_limits": { 00:09:55.441 "rw_ios_per_sec": 0, 00:09:55.441 "rw_mbytes_per_sec": 0, 00:09:55.441 "r_mbytes_per_sec": 0, 00:09:55.441 "w_mbytes_per_sec": 0 00:09:55.441 }, 00:09:55.441 "claimed": true, 00:09:55.441 "claim_type": "exclusive_write", 00:09:55.441 "zoned": false, 00:09:55.441 "supported_io_types": { 00:09:55.441 "read": true, 00:09:55.441 "write": true, 00:09:55.441 "unmap": true, 00:09:55.441 "flush": true, 00:09:55.441 "reset": true, 00:09:55.441 "nvme_admin": false, 00:09:55.441 "nvme_io": false, 00:09:55.441 "nvme_io_md": false, 00:09:55.441 "write_zeroes": true, 00:09:55.441 "zcopy": true, 00:09:55.441 "get_zone_info": false, 00:09:55.441 "zone_management": false, 00:09:55.441 "zone_append": false, 00:09:55.441 "compare": false, 00:09:55.441 "compare_and_write": false, 
00:09:55.441 "abort": true, 00:09:55.441 "seek_hole": false, 00:09:55.441 "seek_data": false, 00:09:55.441 "copy": true, 00:09:55.441 "nvme_iov_md": false 00:09:55.441 }, 00:09:55.441 "memory_domains": [ 00:09:55.441 { 00:09:55.441 "dma_device_id": "system", 00:09:55.441 "dma_device_type": 1 00:09:55.441 }, 00:09:55.441 { 00:09:55.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.441 "dma_device_type": 2 00:09:55.441 } 00:09:55.441 ], 00:09:55.441 "driver_specific": {} 00:09:55.441 } 00:09:55.441 ] 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.441 "name": "Existed_Raid", 00:09:55.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.441 "strip_size_kb": 64, 00:09:55.441 "state": "configuring", 00:09:55.441 "raid_level": "raid0", 00:09:55.441 "superblock": false, 00:09:55.441 "num_base_bdevs": 4, 00:09:55.441 "num_base_bdevs_discovered": 3, 00:09:55.441 "num_base_bdevs_operational": 4, 00:09:55.441 "base_bdevs_list": [ 00:09:55.441 { 00:09:55.441 "name": "BaseBdev1", 00:09:55.441 "uuid": "7425183f-4d18-499d-9557-c1927d472180", 00:09:55.441 "is_configured": true, 00:09:55.441 "data_offset": 0, 00:09:55.441 "data_size": 65536 00:09:55.441 }, 00:09:55.441 { 00:09:55.441 "name": "BaseBdev2", 00:09:55.441 "uuid": "ffa76bb4-21eb-4ab7-b5ae-c6e1e9b51735", 00:09:55.441 "is_configured": true, 00:09:55.441 "data_offset": 0, 00:09:55.441 "data_size": 65536 00:09:55.441 }, 00:09:55.441 { 00:09:55.441 "name": "BaseBdev3", 00:09:55.441 "uuid": "2142bc11-a8c8-4443-834b-36f65a2c1585", 00:09:55.441 "is_configured": true, 00:09:55.441 "data_offset": 0, 00:09:55.441 "data_size": 65536 00:09:55.441 }, 00:09:55.441 { 00:09:55.441 "name": "BaseBdev4", 00:09:55.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.441 "is_configured": false, 
00:09:55.441 "data_offset": 0,
00:09:55.441 "data_size": 0
00:09:55.441 }
00:09:55.441 ]
00:09:55.441 }'
00:09:55.441 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:55.442 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:55.701 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:09:55.701 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.701 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:55.962 [2024-11-18 10:38:21.608371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:55.962 [2024-11-18 10:38:21.608492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:55.962 [2024-11-18 10:38:21.608509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:09:55.962 [2024-11-18 10:38:21.608816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:09:55.962 [2024-11-18 10:38:21.608990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:55.962 [2024-11-18 10:38:21.609002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:55.962 [2024-11-18 10:38:21.609316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:55.962 BaseBdev4
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:55.962 [
00:09:55.962 {
00:09:55.962 "name": "BaseBdev4",
00:09:55.962 "aliases": [
00:09:55.962 "f3b4e94e-7db2-4a0c-becf-e220dd356e41"
00:09:55.962 ],
00:09:55.962 "product_name": "Malloc disk",
00:09:55.962 "block_size": 512,
00:09:55.962 "num_blocks": 65536,
00:09:55.962 "uuid": "f3b4e94e-7db2-4a0c-becf-e220dd356e41",
00:09:55.962 "assigned_rate_limits": {
00:09:55.962 "rw_ios_per_sec": 0,
00:09:55.962 "rw_mbytes_per_sec": 0,
00:09:55.962 "r_mbytes_per_sec": 0,
00:09:55.962 "w_mbytes_per_sec": 0
00:09:55.962 },
00:09:55.962 "claimed": true,
00:09:55.962 "claim_type": "exclusive_write",
00:09:55.962 "zoned": false,
00:09:55.962 "supported_io_types": {
00:09:55.962 "read": true,
00:09:55.962 "write": true,
00:09:55.962 "unmap": true,
00:09:55.962 "flush": true,
00:09:55.962 "reset": true,
00:09:55.962 "nvme_admin": false,
00:09:55.962 "nvme_io": false,
00:09:55.962 "nvme_io_md": false,
00:09:55.962 "write_zeroes": true,
00:09:55.962 "zcopy": true,
00:09:55.962 "get_zone_info": false,
00:09:55.962 "zone_management": false,
00:09:55.962 "zone_append": false,
00:09:55.962 "compare": false,
00:09:55.962 "compare_and_write": false,
00:09:55.962 "abort": true,
00:09:55.962 "seek_hole": false,
00:09:55.962 "seek_data": false,
00:09:55.962 "copy": true,
00:09:55.962 "nvme_iov_md": false
00:09:55.962 },
00:09:55.962 "memory_domains": [
00:09:55.962 {
00:09:55.962 "dma_device_id": "system",
00:09:55.962 "dma_device_type": 1
00:09:55.962 },
00:09:55.962 {
00:09:55.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:55.962 "dma_device_type": 2
00:09:55.962 }
00:09:55.962 ],
00:09:55.962 "driver_specific": {}
00:09:55.962 }
00:09:55.962 ]
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:55.962 "name": "Existed_Raid",
00:09:55.962 "uuid": "c3455387-2aec-4d7c-9856-7e8793503efd",
00:09:55.962 "strip_size_kb": 64,
00:09:55.962 "state": "online",
00:09:55.962 "raid_level": "raid0",
00:09:55.962 "superblock": false,
00:09:55.962 "num_base_bdevs": 4,
00:09:55.962 "num_base_bdevs_discovered": 4,
00:09:55.962 "num_base_bdevs_operational": 4,
00:09:55.962 "base_bdevs_list": [
00:09:55.962 {
00:09:55.962 "name": "BaseBdev1",
00:09:55.962 "uuid": "7425183f-4d18-499d-9557-c1927d472180",
00:09:55.962 "is_configured": true,
00:09:55.962 "data_offset": 0,
00:09:55.962 "data_size": 65536
00:09:55.962 },
00:09:55.962 {
00:09:55.962 "name": "BaseBdev2",
00:09:55.962 "uuid": "ffa76bb4-21eb-4ab7-b5ae-c6e1e9b51735",
00:09:55.962 "is_configured": true,
00:09:55.962 "data_offset": 0,
00:09:55.962 "data_size": 65536
00:09:55.962 },
00:09:55.962 {
00:09:55.962 "name": "BaseBdev3",
00:09:55.962 "uuid": "2142bc11-a8c8-4443-834b-36f65a2c1585",
00:09:55.962 "is_configured": true,
00:09:55.962 "data_offset": 0,
00:09:55.962 "data_size": 65536
00:09:55.962 },
00:09:55.962 {
00:09:55.962 "name": "BaseBdev4",
00:09:55.962 "uuid": "f3b4e94e-7db2-4a0c-becf-e220dd356e41",
00:09:55.962 "is_configured": true,
00:09:55.962 "data_offset": 0,
00:09:55.962 "data_size": 65536
00:09:55.962 }
00:09:55.962 ]
00:09:55.962 }'
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:55.962 10:38:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:56.223 [2024-11-18 10:38:22.047993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:56.223 "name": "Existed_Raid",
00:09:56.223 "aliases": [
00:09:56.223 "c3455387-2aec-4d7c-9856-7e8793503efd"
00:09:56.223 ],
00:09:56.223 "product_name": "Raid Volume",
00:09:56.223 "block_size": 512,
00:09:56.223 "num_blocks": 262144,
00:09:56.223 "uuid": "c3455387-2aec-4d7c-9856-7e8793503efd",
00:09:56.223 "assigned_rate_limits": {
00:09:56.223 "rw_ios_per_sec": 0,
00:09:56.223 "rw_mbytes_per_sec": 0,
00:09:56.223 "r_mbytes_per_sec": 0,
00:09:56.223 "w_mbytes_per_sec": 0
00:09:56.223 },
00:09:56.223 "claimed": false,
00:09:56.223 "zoned": false,
00:09:56.223 "supported_io_types": {
00:09:56.223 "read": true,
00:09:56.223 "write": true,
00:09:56.223 "unmap": true,
00:09:56.223 "flush": true,
00:09:56.223 "reset": true,
00:09:56.223 "nvme_admin": false,
00:09:56.223 "nvme_io": false,
00:09:56.223 "nvme_io_md": false,
00:09:56.223 "write_zeroes": true,
00:09:56.223 "zcopy": false,
00:09:56.223 "get_zone_info": false,
00:09:56.223 "zone_management": false,
00:09:56.223 "zone_append": false,
00:09:56.223 "compare": false,
00:09:56.223 "compare_and_write": false,
00:09:56.223 "abort": false,
00:09:56.223 "seek_hole": false,
00:09:56.223 "seek_data": false,
00:09:56.223 "copy": false,
00:09:56.223 "nvme_iov_md": false
00:09:56.223 },
00:09:56.223 "memory_domains": [
00:09:56.223 {
00:09:56.223 "dma_device_id": "system",
00:09:56.223 "dma_device_type": 1
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:56.223 "dma_device_type": 2
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "dma_device_id": "system",
00:09:56.223 "dma_device_type": 1
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:56.223 "dma_device_type": 2
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "dma_device_id": "system",
00:09:56.223 "dma_device_type": 1
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:56.223 "dma_device_type": 2
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "dma_device_id": "system",
00:09:56.223 "dma_device_type": 1
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:56.223 "dma_device_type": 2
00:09:56.223 }
00:09:56.223 ],
00:09:56.223 "driver_specific": {
00:09:56.223 "raid": {
00:09:56.223 "uuid": "c3455387-2aec-4d7c-9856-7e8793503efd",
00:09:56.223 "strip_size_kb": 64,
00:09:56.223 "state": "online",
00:09:56.223 "raid_level": "raid0",
00:09:56.223 "superblock": false,
00:09:56.223 "num_base_bdevs": 4,
00:09:56.223 "num_base_bdevs_discovered": 4,
00:09:56.223 "num_base_bdevs_operational": 4,
00:09:56.223 "base_bdevs_list": [
00:09:56.223 {
00:09:56.223 "name": "BaseBdev1",
00:09:56.223 "uuid": "7425183f-4d18-499d-9557-c1927d472180",
00:09:56.223 "is_configured": true,
00:09:56.223 "data_offset": 0,
00:09:56.223 "data_size": 65536
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "name": "BaseBdev2",
00:09:56.223 "uuid": "ffa76bb4-21eb-4ab7-b5ae-c6e1e9b51735",
00:09:56.223 "is_configured": true,
00:09:56.223 "data_offset": 0,
00:09:56.223 "data_size": 65536
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "name": "BaseBdev3",
00:09:56.223 "uuid": "2142bc11-a8c8-4443-834b-36f65a2c1585",
00:09:56.223 "is_configured": true,
00:09:56.223 "data_offset": 0,
00:09:56.223 "data_size": 65536
00:09:56.223 },
00:09:56.223 {
00:09:56.223 "name": "BaseBdev4",
00:09:56.223 "uuid": "f3b4e94e-7db2-4a0c-becf-e220dd356e41",
00:09:56.223 "is_configured": true,
00:09:56.223 "data_offset": 0,
00:09:56.223 "data_size": 65536
00:09:56.223 }
00:09:56.223 ]
00:09:56.223 }
00:09:56.223 }
00:09:56.223 }'
00:09:56.223 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:56.483 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:56.483 BaseBdev2
00:09:56.483 BaseBdev3
00:09:56.483 BaseBdev4'
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.484 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.746 [2024-11-18 10:38:22.371167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:56.746 [2024-11-18 10:38:22.371212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:56.746 [2024-11-18 10:38:22.371264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:56.746 "name": "Existed_Raid",
00:09:56.746 "uuid": "c3455387-2aec-4d7c-9856-7e8793503efd",
00:09:56.746 "strip_size_kb": 64,
00:09:56.746 "state": "offline",
00:09:56.746 "raid_level": "raid0",
00:09:56.746 "superblock": false,
00:09:56.746 "num_base_bdevs": 4,
00:09:56.746 "num_base_bdevs_discovered": 3,
00:09:56.746 "num_base_bdevs_operational": 3,
00:09:56.746 "base_bdevs_list": [
00:09:56.746 {
00:09:56.746 "name": null,
00:09:56.746 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:56.746 "is_configured": false,
00:09:56.746 "data_offset": 0,
00:09:56.746 "data_size": 65536
00:09:56.746 },
00:09:56.746 {
00:09:56.746 "name": "BaseBdev2",
00:09:56.746 "uuid": "ffa76bb4-21eb-4ab7-b5ae-c6e1e9b51735",
00:09:56.746 "is_configured": true,
00:09:56.746 "data_offset": 0,
00:09:56.746 "data_size": 65536
00:09:56.746 },
00:09:56.746 {
00:09:56.746 "name": "BaseBdev3",
00:09:56.746 "uuid": "2142bc11-a8c8-4443-834b-36f65a2c1585",
00:09:56.746 "is_configured": true,
00:09:56.746 "data_offset": 0,
00:09:56.746 "data_size": 65536
00:09:56.746 },
00:09:56.746 {
00:09:56.746 "name": "BaseBdev4",
00:09:56.746 "uuid": "f3b4e94e-7db2-4a0c-becf-e220dd356e41",
00:09:56.746 "is_configured": true,
00:09:56.746 "data_offset": 0,
00:09:56.746 "data_size": 65536
00:09:56.746 }
00:09:56.746 ]
00:09:56.746 }'
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:56.746 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.004 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:57.004 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:57.004 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:57.004 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.004 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.004 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.264 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.264 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:57.264 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:57.264 10:38:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:57.264 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
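The `has_redundancy raid0` → `return 1` → `expected_state=offline` trace above captures the key assertion of this test: raid0 stripes without parity or mirroring, so losing a base bdev must take the array offline rather than leaving it degraded but online. A minimal sketch of that decision (not the real bdev_raid.sh source; the exact list of redundant levels here is an assumption for illustration):

```shell
# Hedged sketch of a has_redundancy-style helper: redundant levels keep the
# array online after a base bdev loss, non-redundant ones go offline.
has_redundancy() {
	case $1 in
		raid1 | raid5f) return 0 ;;  # assumed redundant levels, for illustration only
		*) return 1 ;;
	esac
}

level=raid0
if has_redundancy "$level"; then
	expected_state=online
else
	expected_state=offline
fi
echo "$expected_state"
```

Running this for `raid0` prints `offline`, matching the `verify_raid_bdev_state Existed_Raid offline raid0 64 3` call in the trace.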
00:09:57.264 10:38:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.264 [2024-11-18 10:38:22.907146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.264 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.264 [2024-11-18 10:38:23.068181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:57.524 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.535 [2024-11-18 10:38:23.226712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:09:57.535 [2024-11-18 10:38:23.226825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.535 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.796 BaseBdev2
00:09:57.796 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.796 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.797 [
00:09:57.797 {
00:09:57.797 "name": "BaseBdev2",
00:09:57.797 "aliases": [
00:09:57.797 "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3"
00:09:57.797 ],
00:09:57.797 "product_name": "Malloc disk",
00:09:57.797 "block_size": 512,
00:09:57.797 "num_blocks": 65536,
00:09:57.797 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3",
00:09:57.797 "assigned_rate_limits": {
00:09:57.797 "rw_ios_per_sec": 0,
00:09:57.797 "rw_mbytes_per_sec": 0,
00:09:57.797 "r_mbytes_per_sec": 0,
00:09:57.797 "w_mbytes_per_sec": 0
00:09:57.797 },
00:09:57.797 "claimed": false,
00:09:57.797 "zoned": false,
00:09:57.797 "supported_io_types": {
00:09:57.797 "read": true,
00:09:57.797 "write": true,
00:09:57.797 "unmap": true,
00:09:57.797 "flush": true,
00:09:57.797 "reset": true,
00:09:57.797 "nvme_admin": false,
00:09:57.797 "nvme_io": false,
00:09:57.797 "nvme_io_md": false,
00:09:57.797 "write_zeroes": true,
00:09:57.797 "zcopy": true,
00:09:57.797 "get_zone_info": false,
00:09:57.797 "zone_management": false,
00:09:57.797 "zone_append": false,
00:09:57.797 "compare": false,
00:09:57.797 "compare_and_write": false,
00:09:57.797 "abort": true,
00:09:57.797 "seek_hole": false,
00:09:57.797 "seek_data": false,
00:09:57.797 "copy": true,
00:09:57.797 "nvme_iov_md": false
00:09:57.797 },
00:09:57.797 "memory_domains": [
00:09:57.797 {
00:09:57.797 "dma_device_id": "system",
00:09:57.797 "dma_device_type": 1
00:09:57.797 },
00:09:57.797 {
00:09:57.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:57.797 "dma_device_type": 2
00:09:57.797 }
00:09:57.797 ],
00:09:57.797 "driver_specific": {}
00:09:57.797 }
00:09:57.797 ]
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.797 BaseBdev3
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
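The repeating `waitforbdev` sequence in this trace (create a malloc bdev, `bdev_wait_for_examine`, then `bdev_get_bdevs -b <name> -t 2000`) boils down to: block until the named bdev is registered, or give up. In the real helper the `-t 2000` argument lets the RPC itself wait server-side; the client-side polling loop below is an illustrative stand-in, and `get_bdev` is a hypothetical stub for `rpc_cmd bdev_get_bdevs -b <name>`:

```shell
# Hedged sketch of a waitforbdev-style poll (assumptions: get_bdev stub below,
# 20 x 0.1 s budget; the real helper in autotest_common.sh differs in detail).
get_bdev() { [ "$1" = "BaseBdev3" ]; }  # stub: pretend only BaseBdev3 is registered

waitforbdev() {
	local bdev_name=$1 i
	for ((i = 0; i < 20; i++)); do
		get_bdev "$bdev_name" && return 0  # bdev visible: done
		sleep 0.1                          # not yet: back off and retry
	done
	return 1  # timed out
}

waitforbdev BaseBdev3 && echo "BaseBdev3 ready"
```

The same wait-or-timeout shape is what makes each `rpc_cmd bdev_get_bdevs -b BaseBdevN -t 2000` call in the log return the bdev's JSON description only once registration has completed.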
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.797 [
00:09:57.797 {
00:09:57.797 "name": "BaseBdev3",
00:09:57.797 "aliases": [
00:09:57.797 "ff37b53a-ae87-406f-aaa5-8af206cd927e"
00:09:57.797 ],
00:09:57.797 "product_name": "Malloc disk",
00:09:57.797 "block_size": 512,
00:09:57.797 "num_blocks": 65536,
00:09:57.797 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e",
00:09:57.797 "assigned_rate_limits": {
00:09:57.797 "rw_ios_per_sec": 0,
00:09:57.797 "rw_mbytes_per_sec": 0,
00:09:57.797 "r_mbytes_per_sec": 0,
00:09:57.797 "w_mbytes_per_sec": 0
00:09:57.797 },
00:09:57.797 "claimed": false,
00:09:57.797 "zoned": false,
00:09:57.797 "supported_io_types": {
00:09:57.797 "read": true,
00:09:57.797 "write": true,
00:09:57.797 "unmap": true,
00:09:57.797 "flush": true,
00:09:57.797 "reset": true,
00:09:57.797 "nvme_admin": false,
00:09:57.797 "nvme_io": false,
00:09:57.797 "nvme_io_md": false,
00:09:57.797 "write_zeroes": true,
00:09:57.797 "zcopy": true,
00:09:57.797 "get_zone_info": false,
00:09:57.797 "zone_management": false,
00:09:57.797 "zone_append": false,
00:09:57.797 "compare": false,
00:09:57.797 "compare_and_write": false,
00:09:57.797 "abort": true,
00:09:57.797 "seek_hole": false,
00:09:57.797 "seek_data": false,
00:09:57.797 "copy": true, 00:09:57.797 "nvme_iov_md": false 00:09:57.797 }, 00:09:57.797 "memory_domains": [ 00:09:57.797 { 00:09:57.797 "dma_device_id": "system", 00:09:57.797 "dma_device_type": 1 00:09:57.797 }, 00:09:57.797 { 00:09:57.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.797 "dma_device_type": 2 00:09:57.797 } 00:09:57.797 ], 00:09:57.797 "driver_specific": {} 00:09:57.797 } 00:09:57.797 ] 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.797 BaseBdev4 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.797 
10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.797 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.797 [ 00:09:57.797 { 00:09:57.797 "name": "BaseBdev4", 00:09:57.797 "aliases": [ 00:09:57.797 "264ccea6-068e-455d-b1b4-713dbaa4c529" 00:09:57.797 ], 00:09:57.797 "product_name": "Malloc disk", 00:09:57.797 "block_size": 512, 00:09:57.797 "num_blocks": 65536, 00:09:57.797 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:09:57.797 "assigned_rate_limits": { 00:09:57.797 "rw_ios_per_sec": 0, 00:09:57.797 "rw_mbytes_per_sec": 0, 00:09:57.797 "r_mbytes_per_sec": 0, 00:09:57.797 "w_mbytes_per_sec": 0 00:09:57.797 }, 00:09:57.797 "claimed": false, 00:09:57.797 "zoned": false, 00:09:57.797 "supported_io_types": { 00:09:57.797 "read": true, 00:09:57.797 "write": true, 00:09:57.797 "unmap": true, 00:09:57.797 "flush": true, 00:09:57.797 "reset": true, 00:09:57.797 "nvme_admin": false, 00:09:57.797 "nvme_io": false, 00:09:57.797 "nvme_io_md": false, 00:09:57.797 "write_zeroes": true, 00:09:57.797 "zcopy": true, 00:09:57.797 "get_zone_info": false, 00:09:57.797 "zone_management": false, 00:09:57.797 "zone_append": false, 00:09:57.797 "compare": false, 00:09:57.798 "compare_and_write": false, 00:09:57.798 "abort": true, 00:09:57.798 "seek_hole": false, 00:09:57.798 "seek_data": false, 00:09:57.798 
"copy": true, 00:09:57.798 "nvme_iov_md": false 00:09:57.798 }, 00:09:57.798 "memory_domains": [ 00:09:57.798 { 00:09:57.798 "dma_device_id": "system", 00:09:57.798 "dma_device_type": 1 00:09:57.798 }, 00:09:57.798 { 00:09:57.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.798 "dma_device_type": 2 00:09:57.798 } 00:09:57.798 ], 00:09:57.798 "driver_specific": {} 00:09:57.798 } 00:09:57.798 ] 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.798 [2024-11-18 10:38:23.640569] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.798 [2024-11-18 10:38:23.640686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.798 [2024-11-18 10:38:23.640746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.798 [2024-11-18 10:38:23.642727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.798 [2024-11-18 10:38:23.642817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.798 10:38:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.798 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.058 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.058 "name": "Existed_Raid", 00:09:58.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.058 "strip_size_kb": 64, 00:09:58.058 "state": "configuring", 00:09:58.058 
"raid_level": "raid0", 00:09:58.058 "superblock": false, 00:09:58.058 "num_base_bdevs": 4, 00:09:58.058 "num_base_bdevs_discovered": 3, 00:09:58.058 "num_base_bdevs_operational": 4, 00:09:58.058 "base_bdevs_list": [ 00:09:58.058 { 00:09:58.058 "name": "BaseBdev1", 00:09:58.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.058 "is_configured": false, 00:09:58.058 "data_offset": 0, 00:09:58.058 "data_size": 0 00:09:58.058 }, 00:09:58.058 { 00:09:58.058 "name": "BaseBdev2", 00:09:58.058 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:09:58.058 "is_configured": true, 00:09:58.058 "data_offset": 0, 00:09:58.058 "data_size": 65536 00:09:58.058 }, 00:09:58.058 { 00:09:58.058 "name": "BaseBdev3", 00:09:58.058 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 00:09:58.058 "is_configured": true, 00:09:58.058 "data_offset": 0, 00:09:58.058 "data_size": 65536 00:09:58.058 }, 00:09:58.058 { 00:09:58.058 "name": "BaseBdev4", 00:09:58.058 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:09:58.058 "is_configured": true, 00:09:58.058 "data_offset": 0, 00:09:58.058 "data_size": 65536 00:09:58.058 } 00:09:58.058 ] 00:09:58.058 }' 00:09:58.058 10:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.058 10:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.318 [2024-11-18 10:38:24.111759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.318 "name": "Existed_Raid", 00:09:58.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.318 "strip_size_kb": 64, 00:09:58.318 "state": "configuring", 00:09:58.318 "raid_level": "raid0", 00:09:58.318 "superblock": false, 00:09:58.318 
"num_base_bdevs": 4, 00:09:58.318 "num_base_bdevs_discovered": 2, 00:09:58.318 "num_base_bdevs_operational": 4, 00:09:58.318 "base_bdevs_list": [ 00:09:58.318 { 00:09:58.318 "name": "BaseBdev1", 00:09:58.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.318 "is_configured": false, 00:09:58.318 "data_offset": 0, 00:09:58.318 "data_size": 0 00:09:58.318 }, 00:09:58.318 { 00:09:58.318 "name": null, 00:09:58.318 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:09:58.318 "is_configured": false, 00:09:58.318 "data_offset": 0, 00:09:58.318 "data_size": 65536 00:09:58.318 }, 00:09:58.318 { 00:09:58.318 "name": "BaseBdev3", 00:09:58.318 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 00:09:58.318 "is_configured": true, 00:09:58.318 "data_offset": 0, 00:09:58.318 "data_size": 65536 00:09:58.318 }, 00:09:58.318 { 00:09:58.318 "name": "BaseBdev4", 00:09:58.318 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:09:58.318 "is_configured": true, 00:09:58.318 "data_offset": 0, 00:09:58.318 "data_size": 65536 00:09:58.318 } 00:09:58.318 ] 00:09:58.318 }' 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.318 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:58.888 10:38:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.888 BaseBdev1 00:09:58.888 [2024-11-18 10:38:24.637573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.888 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.889 [ 00:09:58.889 { 00:09:58.889 "name": "BaseBdev1", 00:09:58.889 "aliases": [ 00:09:58.889 "bad506c2-a003-466a-9b12-b451d08afdc9" 00:09:58.889 ], 00:09:58.889 "product_name": "Malloc disk", 00:09:58.889 "block_size": 512, 00:09:58.889 "num_blocks": 65536, 00:09:58.889 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:09:58.889 "assigned_rate_limits": { 00:09:58.889 "rw_ios_per_sec": 0, 00:09:58.889 "rw_mbytes_per_sec": 0, 00:09:58.889 "r_mbytes_per_sec": 0, 00:09:58.889 "w_mbytes_per_sec": 0 00:09:58.889 }, 00:09:58.889 "claimed": true, 00:09:58.889 "claim_type": "exclusive_write", 00:09:58.889 "zoned": false, 00:09:58.889 "supported_io_types": { 00:09:58.889 "read": true, 00:09:58.889 "write": true, 00:09:58.889 "unmap": true, 00:09:58.889 "flush": true, 00:09:58.889 "reset": true, 00:09:58.889 "nvme_admin": false, 00:09:58.889 "nvme_io": false, 00:09:58.889 "nvme_io_md": false, 00:09:58.889 "write_zeroes": true, 00:09:58.889 "zcopy": true, 00:09:58.889 "get_zone_info": false, 00:09:58.889 "zone_management": false, 00:09:58.889 "zone_append": false, 00:09:58.889 "compare": false, 00:09:58.889 "compare_and_write": false, 00:09:58.889 "abort": true, 00:09:58.889 "seek_hole": false, 00:09:58.889 "seek_data": false, 00:09:58.889 "copy": true, 00:09:58.889 "nvme_iov_md": false 00:09:58.889 }, 00:09:58.889 "memory_domains": [ 00:09:58.889 { 00:09:58.889 "dma_device_id": "system", 00:09:58.889 "dma_device_type": 1 00:09:58.889 }, 00:09:58.889 { 00:09:58.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.889 "dma_device_type": 2 00:09:58.889 } 00:09:58.889 ], 00:09:58.889 "driver_specific": {} 00:09:58.889 } 00:09:58.889 ] 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.889 "name": "Existed_Raid", 00:09:58.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.889 "strip_size_kb": 64, 00:09:58.889 "state": "configuring", 00:09:58.889 "raid_level": "raid0", 00:09:58.889 "superblock": false, 
00:09:58.889 "num_base_bdevs": 4, 00:09:58.889 "num_base_bdevs_discovered": 3, 00:09:58.889 "num_base_bdevs_operational": 4, 00:09:58.889 "base_bdevs_list": [ 00:09:58.889 { 00:09:58.889 "name": "BaseBdev1", 00:09:58.889 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:09:58.889 "is_configured": true, 00:09:58.889 "data_offset": 0, 00:09:58.889 "data_size": 65536 00:09:58.889 }, 00:09:58.889 { 00:09:58.889 "name": null, 00:09:58.889 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:09:58.889 "is_configured": false, 00:09:58.889 "data_offset": 0, 00:09:58.889 "data_size": 65536 00:09:58.889 }, 00:09:58.889 { 00:09:58.889 "name": "BaseBdev3", 00:09:58.889 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 00:09:58.889 "is_configured": true, 00:09:58.889 "data_offset": 0, 00:09:58.889 "data_size": 65536 00:09:58.889 }, 00:09:58.889 { 00:09:58.889 "name": "BaseBdev4", 00:09:58.889 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:09:58.889 "is_configured": true, 00:09:58.889 "data_offset": 0, 00:09:58.889 "data_size": 65536 00:09:58.889 } 00:09:58.889 ] 00:09:58.889 }' 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.889 10:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.149 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.149 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.149 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.149 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.149 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:59.410 10:38:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.410 [2024-11-18 10:38:25.056899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.410 10:38:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.410 "name": "Existed_Raid", 00:09:59.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.410 "strip_size_kb": 64, 00:09:59.410 "state": "configuring", 00:09:59.410 "raid_level": "raid0", 00:09:59.410 "superblock": false, 00:09:59.410 "num_base_bdevs": 4, 00:09:59.410 "num_base_bdevs_discovered": 2, 00:09:59.410 "num_base_bdevs_operational": 4, 00:09:59.410 "base_bdevs_list": [ 00:09:59.410 { 00:09:59.410 "name": "BaseBdev1", 00:09:59.410 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:09:59.410 "is_configured": true, 00:09:59.410 "data_offset": 0, 00:09:59.410 "data_size": 65536 00:09:59.410 }, 00:09:59.410 { 00:09:59.410 "name": null, 00:09:59.410 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:09:59.410 "is_configured": false, 00:09:59.410 "data_offset": 0, 00:09:59.410 "data_size": 65536 00:09:59.410 }, 00:09:59.410 { 00:09:59.410 "name": null, 00:09:59.410 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 00:09:59.410 "is_configured": false, 00:09:59.410 "data_offset": 0, 00:09:59.410 "data_size": 65536 00:09:59.410 }, 00:09:59.410 { 00:09:59.410 "name": "BaseBdev4", 00:09:59.410 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:09:59.410 "is_configured": true, 00:09:59.410 "data_offset": 0, 00:09:59.410 "data_size": 65536 00:09:59.410 } 00:09:59.410 ] 00:09:59.410 }' 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.410 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.691 [2024-11-18 10:38:25.480190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.691 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.691 "name": "Existed_Raid", 00:09:59.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.691 "strip_size_kb": 64, 00:09:59.691 "state": "configuring", 00:09:59.691 "raid_level": "raid0", 00:09:59.691 "superblock": false, 00:09:59.691 "num_base_bdevs": 4, 00:09:59.691 "num_base_bdevs_discovered": 3, 00:09:59.691 "num_base_bdevs_operational": 4, 00:09:59.691 "base_bdevs_list": [ 00:09:59.691 { 00:09:59.691 "name": "BaseBdev1", 00:09:59.691 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:09:59.691 "is_configured": true, 00:09:59.691 "data_offset": 0, 00:09:59.691 "data_size": 65536 00:09:59.691 }, 00:09:59.691 { 00:09:59.691 "name": null, 00:09:59.692 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:09:59.692 "is_configured": false, 00:09:59.692 "data_offset": 0, 00:09:59.692 "data_size": 65536 00:09:59.692 }, 00:09:59.692 { 00:09:59.692 "name": "BaseBdev3", 00:09:59.692 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 
00:09:59.692 "is_configured": true, 00:09:59.692 "data_offset": 0, 00:09:59.692 "data_size": 65536 00:09:59.692 }, 00:09:59.692 { 00:09:59.692 "name": "BaseBdev4", 00:09:59.692 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:09:59.692 "is_configured": true, 00:09:59.692 "data_offset": 0, 00:09:59.692 "data_size": 65536 00:09:59.692 } 00:09:59.692 ] 00:09:59.692 }' 00:09:59.692 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.692 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.279 10:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.279 [2024-11-18 10:38:25.955380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.279 10:38:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.279 "name": "Existed_Raid", 00:10:00.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.279 "strip_size_kb": 64, 00:10:00.279 "state": "configuring", 00:10:00.279 "raid_level": "raid0", 00:10:00.279 "superblock": false, 00:10:00.279 "num_base_bdevs": 4, 00:10:00.279 "num_base_bdevs_discovered": 2, 00:10:00.279 
"num_base_bdevs_operational": 4, 00:10:00.279 "base_bdevs_list": [ 00:10:00.279 { 00:10:00.279 "name": null, 00:10:00.279 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:10:00.279 "is_configured": false, 00:10:00.279 "data_offset": 0, 00:10:00.279 "data_size": 65536 00:10:00.279 }, 00:10:00.279 { 00:10:00.279 "name": null, 00:10:00.279 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:10:00.279 "is_configured": false, 00:10:00.279 "data_offset": 0, 00:10:00.279 "data_size": 65536 00:10:00.279 }, 00:10:00.279 { 00:10:00.279 "name": "BaseBdev3", 00:10:00.279 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 00:10:00.279 "is_configured": true, 00:10:00.279 "data_offset": 0, 00:10:00.279 "data_size": 65536 00:10:00.279 }, 00:10:00.279 { 00:10:00.279 "name": "BaseBdev4", 00:10:00.279 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:10:00.279 "is_configured": true, 00:10:00.279 "data_offset": 0, 00:10:00.279 "data_size": 65536 00:10:00.279 } 00:10:00.279 ] 00:10:00.279 }' 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.279 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.850 [2024-11-18 10:38:26.486458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.850 10:38:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.850 "name": "Existed_Raid", 00:10:00.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.850 "strip_size_kb": 64, 00:10:00.850 "state": "configuring", 00:10:00.850 "raid_level": "raid0", 00:10:00.850 "superblock": false, 00:10:00.850 "num_base_bdevs": 4, 00:10:00.850 "num_base_bdevs_discovered": 3, 00:10:00.850 "num_base_bdevs_operational": 4, 00:10:00.850 "base_bdevs_list": [ 00:10:00.850 { 00:10:00.850 "name": null, 00:10:00.850 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:10:00.850 "is_configured": false, 00:10:00.850 "data_offset": 0, 00:10:00.850 "data_size": 65536 00:10:00.850 }, 00:10:00.850 { 00:10:00.850 "name": "BaseBdev2", 00:10:00.850 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:10:00.850 "is_configured": true, 00:10:00.850 "data_offset": 0, 00:10:00.850 "data_size": 65536 00:10:00.850 }, 00:10:00.850 { 00:10:00.850 "name": "BaseBdev3", 00:10:00.850 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 00:10:00.850 "is_configured": true, 00:10:00.850 "data_offset": 0, 00:10:00.850 "data_size": 65536 00:10:00.850 }, 00:10:00.850 { 00:10:00.850 "name": "BaseBdev4", 00:10:00.850 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:10:00.850 "is_configured": true, 00:10:00.850 "data_offset": 0, 00:10:00.850 "data_size": 65536 00:10:00.850 } 00:10:00.850 ] 00:10:00.850 }' 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.850 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.111 
10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bad506c2-a003-466a-9b12-b451d08afdc9 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.111 10:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.371 [2024-11-18 10:38:26.999820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:01.371 [2024-11-18 10:38:26.999873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:01.371 [2024-11-18 10:38:26.999881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:01.371 [2024-11-18 10:38:27.000205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:01.371 [2024-11-18 10:38:27.000384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:01.371 [2024-11-18 10:38:27.000398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:01.371 [2024-11-18 10:38:27.000654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.371 NewBaseBdev 00:10:01.371 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.371 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:01.371 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:01.371 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.371 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.371 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:01.372 [ 00:10:01.372 { 00:10:01.372 "name": "NewBaseBdev", 00:10:01.372 "aliases": [ 00:10:01.372 "bad506c2-a003-466a-9b12-b451d08afdc9" 00:10:01.372 ], 00:10:01.372 "product_name": "Malloc disk", 00:10:01.372 "block_size": 512, 00:10:01.372 "num_blocks": 65536, 00:10:01.372 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:10:01.372 "assigned_rate_limits": { 00:10:01.372 "rw_ios_per_sec": 0, 00:10:01.372 "rw_mbytes_per_sec": 0, 00:10:01.372 "r_mbytes_per_sec": 0, 00:10:01.372 "w_mbytes_per_sec": 0 00:10:01.372 }, 00:10:01.372 "claimed": true, 00:10:01.372 "claim_type": "exclusive_write", 00:10:01.372 "zoned": false, 00:10:01.372 "supported_io_types": { 00:10:01.372 "read": true, 00:10:01.372 "write": true, 00:10:01.372 "unmap": true, 00:10:01.372 "flush": true, 00:10:01.372 "reset": true, 00:10:01.372 "nvme_admin": false, 00:10:01.372 "nvme_io": false, 00:10:01.372 "nvme_io_md": false, 00:10:01.372 "write_zeroes": true, 00:10:01.372 "zcopy": true, 00:10:01.372 "get_zone_info": false, 00:10:01.372 "zone_management": false, 00:10:01.372 "zone_append": false, 00:10:01.372 "compare": false, 00:10:01.372 "compare_and_write": false, 00:10:01.372 "abort": true, 00:10:01.372 "seek_hole": false, 00:10:01.372 "seek_data": false, 00:10:01.372 "copy": true, 00:10:01.372 "nvme_iov_md": false 00:10:01.372 }, 00:10:01.372 "memory_domains": [ 00:10:01.372 { 00:10:01.372 "dma_device_id": "system", 00:10:01.372 "dma_device_type": 1 00:10:01.372 }, 00:10:01.372 { 00:10:01.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.372 "dma_device_type": 2 00:10:01.372 } 00:10:01.372 ], 00:10:01.372 "driver_specific": {} 00:10:01.372 } 00:10:01.372 ] 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.372 "name": "Existed_Raid", 00:10:01.372 "uuid": "d796defc-8815-4b67-8a3e-c8533d8ca35b", 00:10:01.372 "strip_size_kb": 64, 00:10:01.372 "state": "online", 00:10:01.372 "raid_level": "raid0", 00:10:01.372 "superblock": false, 00:10:01.372 "num_base_bdevs": 4, 00:10:01.372 
"num_base_bdevs_discovered": 4, 00:10:01.372 "num_base_bdevs_operational": 4, 00:10:01.372 "base_bdevs_list": [ 00:10:01.372 { 00:10:01.372 "name": "NewBaseBdev", 00:10:01.372 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:10:01.372 "is_configured": true, 00:10:01.372 "data_offset": 0, 00:10:01.372 "data_size": 65536 00:10:01.372 }, 00:10:01.372 { 00:10:01.372 "name": "BaseBdev2", 00:10:01.372 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:10:01.372 "is_configured": true, 00:10:01.372 "data_offset": 0, 00:10:01.372 "data_size": 65536 00:10:01.372 }, 00:10:01.372 { 00:10:01.372 "name": "BaseBdev3", 00:10:01.372 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 00:10:01.372 "is_configured": true, 00:10:01.372 "data_offset": 0, 00:10:01.372 "data_size": 65536 00:10:01.372 }, 00:10:01.372 { 00:10:01.372 "name": "BaseBdev4", 00:10:01.372 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:10:01.372 "is_configured": true, 00:10:01.372 "data_offset": 0, 00:10:01.372 "data_size": 65536 00:10:01.372 } 00:10:01.372 ] 00:10:01.372 }' 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.372 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.632 [2024-11-18 10:38:27.487446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.632 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.892 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.892 "name": "Existed_Raid", 00:10:01.892 "aliases": [ 00:10:01.892 "d796defc-8815-4b67-8a3e-c8533d8ca35b" 00:10:01.892 ], 00:10:01.892 "product_name": "Raid Volume", 00:10:01.892 "block_size": 512, 00:10:01.892 "num_blocks": 262144, 00:10:01.892 "uuid": "d796defc-8815-4b67-8a3e-c8533d8ca35b", 00:10:01.892 "assigned_rate_limits": { 00:10:01.892 "rw_ios_per_sec": 0, 00:10:01.892 "rw_mbytes_per_sec": 0, 00:10:01.892 "r_mbytes_per_sec": 0, 00:10:01.892 "w_mbytes_per_sec": 0 00:10:01.892 }, 00:10:01.892 "claimed": false, 00:10:01.892 "zoned": false, 00:10:01.892 "supported_io_types": { 00:10:01.892 "read": true, 00:10:01.892 "write": true, 00:10:01.892 "unmap": true, 00:10:01.892 "flush": true, 00:10:01.892 "reset": true, 00:10:01.892 "nvme_admin": false, 00:10:01.892 "nvme_io": false, 00:10:01.892 "nvme_io_md": false, 00:10:01.892 "write_zeroes": true, 00:10:01.892 "zcopy": false, 00:10:01.892 "get_zone_info": false, 00:10:01.892 "zone_management": false, 00:10:01.892 "zone_append": false, 00:10:01.892 "compare": false, 00:10:01.892 "compare_and_write": false, 00:10:01.892 "abort": false, 00:10:01.892 "seek_hole": false, 00:10:01.892 "seek_data": false, 00:10:01.892 "copy": false, 00:10:01.892 "nvme_iov_md": false 00:10:01.892 }, 00:10:01.892 "memory_domains": [ 
00:10:01.892 { 00:10:01.892 "dma_device_id": "system", 00:10:01.892 "dma_device_type": 1 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.892 "dma_device_type": 2 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "dma_device_id": "system", 00:10:01.892 "dma_device_type": 1 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.892 "dma_device_type": 2 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "dma_device_id": "system", 00:10:01.892 "dma_device_type": 1 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.892 "dma_device_type": 2 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "dma_device_id": "system", 00:10:01.892 "dma_device_type": 1 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.892 "dma_device_type": 2 00:10:01.892 } 00:10:01.892 ], 00:10:01.892 "driver_specific": { 00:10:01.892 "raid": { 00:10:01.892 "uuid": "d796defc-8815-4b67-8a3e-c8533d8ca35b", 00:10:01.892 "strip_size_kb": 64, 00:10:01.892 "state": "online", 00:10:01.892 "raid_level": "raid0", 00:10:01.892 "superblock": false, 00:10:01.892 "num_base_bdevs": 4, 00:10:01.892 "num_base_bdevs_discovered": 4, 00:10:01.892 "num_base_bdevs_operational": 4, 00:10:01.892 "base_bdevs_list": [ 00:10:01.892 { 00:10:01.892 "name": "NewBaseBdev", 00:10:01.892 "uuid": "bad506c2-a003-466a-9b12-b451d08afdc9", 00:10:01.892 "is_configured": true, 00:10:01.892 "data_offset": 0, 00:10:01.892 "data_size": 65536 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "name": "BaseBdev2", 00:10:01.892 "uuid": "ed6d43d1-c6be-492d-bafc-6ef9e00a7bf3", 00:10:01.892 "is_configured": true, 00:10:01.892 "data_offset": 0, 00:10:01.892 "data_size": 65536 00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "name": "BaseBdev3", 00:10:01.892 "uuid": "ff37b53a-ae87-406f-aaa5-8af206cd927e", 00:10:01.892 "is_configured": true, 00:10:01.892 "data_offset": 0, 00:10:01.892 "data_size": 65536 
00:10:01.892 }, 00:10:01.892 { 00:10:01.892 "name": "BaseBdev4", 00:10:01.892 "uuid": "264ccea6-068e-455d-b1b4-713dbaa4c529", 00:10:01.892 "is_configured": true, 00:10:01.892 "data_offset": 0, 00:10:01.892 "data_size": 65536 00:10:01.892 } 00:10:01.892 ] 00:10:01.892 } 00:10:01.892 } 00:10:01.892 }' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:01.893 BaseBdev2 00:10:01.893 BaseBdev3 00:10:01.893 BaseBdev4' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.893 
10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.893 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.153 [2024-11-18 10:38:27.818506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.153 [2024-11-18 10:38:27.818579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.153 [2024-11-18 10:38:27.818671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.153 [2024-11-18 10:38:27.818756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.153 [2024-11-18 10:38:27.818810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69244 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69244 ']' 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69244 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69244 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.153 killing process with pid 69244 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69244' 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69244 00:10:02.153 [2024-11-18 10:38:27.853545] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.153 10:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69244 00:10:02.413 [2024-11-18 10:38:28.268462] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:03.795 00:10:03.795 real 0m11.299s 00:10:03.795 user 0m17.639s 00:10:03.795 sys 0m2.167s 00:10:03.795 ************************************ 00:10:03.795 END TEST raid_state_function_test 00:10:03.795 ************************************ 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.795 10:38:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:03.795 10:38:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.795 10:38:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.795 10:38:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.795 ************************************ 00:10:03.795 START TEST raid_state_function_test_sb 00:10:03.795 ************************************ 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:03.795 
10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69915 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69915' 00:10:03.795 Process raid pid: 69915 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69915 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69915 ']' 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.795 10:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.795 [2024-11-18 10:38:29.597777] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:03.795 [2024-11-18 10:38:29.597901] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.054 [2024-11-18 10:38:29.778844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.054 [2024-11-18 10:38:29.909175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.314 [2024-11-18 10:38:30.135793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.314 [2024-11-18 10:38:30.135839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.573 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.573 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:04.573 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.573 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.573 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.573 [2024-11-18 10:38:30.423233] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.573 [2024-11-18 10:38:30.423290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.573 [2024-11-18 10:38:30.423300] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.573 [2024-11-18 10:38:30.423311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.573 [2024-11-18 10:38:30.423317] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:04.573 [2024-11-18 10:38:30.423327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.573 [2024-11-18 10:38:30.423332] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:04.573 [2024-11-18 10:38:30.423341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.573 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.573 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.574 10:38:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.574 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.833 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.833 "name": "Existed_Raid", 00:10:04.833 "uuid": "6705443a-fe67-4ae1-a4f5-3d0f1cee72c9", 00:10:04.833 "strip_size_kb": 64, 00:10:04.833 "state": "configuring", 00:10:04.833 "raid_level": "raid0", 00:10:04.833 "superblock": true, 00:10:04.833 "num_base_bdevs": 4, 00:10:04.833 "num_base_bdevs_discovered": 0, 00:10:04.833 "num_base_bdevs_operational": 4, 00:10:04.833 "base_bdevs_list": [ 00:10:04.833 { 00:10:04.833 "name": "BaseBdev1", 00:10:04.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.833 "is_configured": false, 00:10:04.833 "data_offset": 0, 00:10:04.833 "data_size": 0 00:10:04.833 }, 00:10:04.833 { 00:10:04.833 "name": "BaseBdev2", 00:10:04.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.833 "is_configured": false, 00:10:04.833 "data_offset": 0, 00:10:04.833 "data_size": 0 00:10:04.833 }, 00:10:04.833 { 00:10:04.833 "name": "BaseBdev3", 00:10:04.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.833 "is_configured": false, 00:10:04.833 "data_offset": 0, 00:10:04.833 "data_size": 0 00:10:04.833 }, 00:10:04.833 { 00:10:04.833 "name": "BaseBdev4", 00:10:04.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.833 "is_configured": false, 00:10:04.833 "data_offset": 0, 00:10:04.833 "data_size": 0 00:10:04.833 } 00:10:04.833 ] 00:10:04.833 }' 00:10:04.833 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.833 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.093 [2024-11-18 10:38:30.882317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.093 [2024-11-18 10:38:30.882429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.093 [2024-11-18 10:38:30.894622] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.093 [2024-11-18 10:38:30.894701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.093 [2024-11-18 10:38:30.894729] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.093 [2024-11-18 10:38:30.894752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.093 [2024-11-18 10:38:30.894769] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.093 [2024-11-18 10:38:30.894790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.093 [2024-11-18 10:38:30.894808] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:05.093 [2024-11-18 10:38:30.894829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.093 [2024-11-18 10:38:30.947832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.093 BaseBdev1 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.093 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.353 [ 00:10:05.353 { 00:10:05.353 "name": "BaseBdev1", 00:10:05.353 "aliases": [ 00:10:05.353 "05dcc249-7a68-464a-8073-ff245b3bc9f9" 00:10:05.353 ], 00:10:05.353 "product_name": "Malloc disk", 00:10:05.353 "block_size": 512, 00:10:05.353 "num_blocks": 65536, 00:10:05.353 "uuid": "05dcc249-7a68-464a-8073-ff245b3bc9f9", 00:10:05.353 "assigned_rate_limits": { 00:10:05.353 "rw_ios_per_sec": 0, 00:10:05.353 "rw_mbytes_per_sec": 0, 00:10:05.353 "r_mbytes_per_sec": 0, 00:10:05.353 "w_mbytes_per_sec": 0 00:10:05.353 }, 00:10:05.353 "claimed": true, 00:10:05.353 "claim_type": "exclusive_write", 00:10:05.353 "zoned": false, 00:10:05.353 "supported_io_types": { 00:10:05.353 "read": true, 00:10:05.353 "write": true, 00:10:05.353 "unmap": true, 00:10:05.353 "flush": true, 00:10:05.353 "reset": true, 00:10:05.353 "nvme_admin": false, 00:10:05.353 "nvme_io": false, 00:10:05.353 "nvme_io_md": false, 00:10:05.353 "write_zeroes": true, 00:10:05.353 "zcopy": true, 00:10:05.353 "get_zone_info": false, 00:10:05.353 "zone_management": false, 00:10:05.353 "zone_append": false, 00:10:05.353 "compare": false, 00:10:05.353 "compare_and_write": false, 00:10:05.353 "abort": true, 00:10:05.353 "seek_hole": false, 00:10:05.353 "seek_data": false, 00:10:05.353 "copy": true, 00:10:05.353 "nvme_iov_md": false 00:10:05.353 }, 00:10:05.353 "memory_domains": [ 00:10:05.353 { 00:10:05.353 "dma_device_id": "system", 00:10:05.353 "dma_device_type": 1 00:10:05.353 }, 00:10:05.353 { 00:10:05.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.353 "dma_device_type": 2 00:10:05.353 } 00:10:05.353 ], 00:10:05.353 "driver_specific": {} 
00:10:05.353 } 00:10:05.353 ] 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.353 10:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.353 10:38:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.353 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.353 "name": "Existed_Raid", 00:10:05.353 "uuid": "1ae1b5fc-f37d-4ee0-a5eb-abcdb4191f8f", 00:10:05.353 "strip_size_kb": 64, 00:10:05.353 "state": "configuring", 00:10:05.353 "raid_level": "raid0", 00:10:05.353 "superblock": true, 00:10:05.353 "num_base_bdevs": 4, 00:10:05.353 "num_base_bdevs_discovered": 1, 00:10:05.353 "num_base_bdevs_operational": 4, 00:10:05.353 "base_bdevs_list": [ 00:10:05.353 { 00:10:05.353 "name": "BaseBdev1", 00:10:05.353 "uuid": "05dcc249-7a68-464a-8073-ff245b3bc9f9", 00:10:05.353 "is_configured": true, 00:10:05.353 "data_offset": 2048, 00:10:05.353 "data_size": 63488 00:10:05.353 }, 00:10:05.353 { 00:10:05.353 "name": "BaseBdev2", 00:10:05.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.353 "is_configured": false, 00:10:05.353 "data_offset": 0, 00:10:05.353 "data_size": 0 00:10:05.353 }, 00:10:05.353 { 00:10:05.353 "name": "BaseBdev3", 00:10:05.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.353 "is_configured": false, 00:10:05.353 "data_offset": 0, 00:10:05.353 "data_size": 0 00:10:05.353 }, 00:10:05.353 { 00:10:05.353 "name": "BaseBdev4", 00:10:05.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.353 "is_configured": false, 00:10:05.353 "data_offset": 0, 00:10:05.353 "data_size": 0 00:10:05.353 } 00:10:05.353 ] 00:10:05.353 }' 00:10:05.353 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.353 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.613 [2024-11-18 10:38:31.455050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.613 [2024-11-18 10:38:31.455095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.613 [2024-11-18 10:38:31.463120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.613 [2024-11-18 10:38:31.465228] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.613 [2024-11-18 10:38:31.465297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.613 [2024-11-18 10:38:31.465311] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.613 [2024-11-18 10:38:31.465321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.613 [2024-11-18 10:38:31.465328] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.613 [2024-11-18 10:38:31.465336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:05.613 10:38:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.613 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.872 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.872 "name": 
"Existed_Raid", 00:10:05.872 "uuid": "2348c61b-248c-44fa-a40b-02a5b7103332", 00:10:05.872 "strip_size_kb": 64, 00:10:05.872 "state": "configuring", 00:10:05.872 "raid_level": "raid0", 00:10:05.872 "superblock": true, 00:10:05.872 "num_base_bdevs": 4, 00:10:05.872 "num_base_bdevs_discovered": 1, 00:10:05.872 "num_base_bdevs_operational": 4, 00:10:05.872 "base_bdevs_list": [ 00:10:05.872 { 00:10:05.872 "name": "BaseBdev1", 00:10:05.872 "uuid": "05dcc249-7a68-464a-8073-ff245b3bc9f9", 00:10:05.872 "is_configured": true, 00:10:05.872 "data_offset": 2048, 00:10:05.872 "data_size": 63488 00:10:05.873 }, 00:10:05.873 { 00:10:05.873 "name": "BaseBdev2", 00:10:05.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.873 "is_configured": false, 00:10:05.873 "data_offset": 0, 00:10:05.873 "data_size": 0 00:10:05.873 }, 00:10:05.873 { 00:10:05.873 "name": "BaseBdev3", 00:10:05.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.873 "is_configured": false, 00:10:05.873 "data_offset": 0, 00:10:05.873 "data_size": 0 00:10:05.873 }, 00:10:05.873 { 00:10:05.873 "name": "BaseBdev4", 00:10:05.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.873 "is_configured": false, 00:10:05.873 "data_offset": 0, 00:10:05.873 "data_size": 0 00:10:05.873 } 00:10:05.873 ] 00:10:05.873 }' 00:10:05.873 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.873 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.132 [2024-11-18 10:38:31.901445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:06.132 BaseBdev2 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.132 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.132 [ 00:10:06.132 { 00:10:06.132 "name": "BaseBdev2", 00:10:06.132 "aliases": [ 00:10:06.132 "064a4ae0-669d-4650-8b9d-9b247771032c" 00:10:06.132 ], 00:10:06.132 "product_name": "Malloc disk", 00:10:06.132 "block_size": 512, 00:10:06.132 "num_blocks": 65536, 00:10:06.132 "uuid": "064a4ae0-669d-4650-8b9d-9b247771032c", 00:10:06.132 
"assigned_rate_limits": { 00:10:06.132 "rw_ios_per_sec": 0, 00:10:06.132 "rw_mbytes_per_sec": 0, 00:10:06.132 "r_mbytes_per_sec": 0, 00:10:06.132 "w_mbytes_per_sec": 0 00:10:06.132 }, 00:10:06.132 "claimed": true, 00:10:06.132 "claim_type": "exclusive_write", 00:10:06.132 "zoned": false, 00:10:06.132 "supported_io_types": { 00:10:06.132 "read": true, 00:10:06.132 "write": true, 00:10:06.132 "unmap": true, 00:10:06.132 "flush": true, 00:10:06.132 "reset": true, 00:10:06.132 "nvme_admin": false, 00:10:06.132 "nvme_io": false, 00:10:06.132 "nvme_io_md": false, 00:10:06.132 "write_zeroes": true, 00:10:06.132 "zcopy": true, 00:10:06.132 "get_zone_info": false, 00:10:06.132 "zone_management": false, 00:10:06.132 "zone_append": false, 00:10:06.132 "compare": false, 00:10:06.132 "compare_and_write": false, 00:10:06.132 "abort": true, 00:10:06.133 "seek_hole": false, 00:10:06.133 "seek_data": false, 00:10:06.133 "copy": true, 00:10:06.133 "nvme_iov_md": false 00:10:06.133 }, 00:10:06.133 "memory_domains": [ 00:10:06.133 { 00:10:06.133 "dma_device_id": "system", 00:10:06.133 "dma_device_type": 1 00:10:06.133 }, 00:10:06.133 { 00:10:06.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.133 "dma_device_type": 2 00:10:06.133 } 00:10:06.133 ], 00:10:06.133 "driver_specific": {} 00:10:06.133 } 00:10:06.133 ] 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.133 "name": "Existed_Raid", 00:10:06.133 "uuid": "2348c61b-248c-44fa-a40b-02a5b7103332", 00:10:06.133 "strip_size_kb": 64, 00:10:06.133 "state": "configuring", 00:10:06.133 "raid_level": "raid0", 00:10:06.133 "superblock": true, 00:10:06.133 "num_base_bdevs": 4, 00:10:06.133 "num_base_bdevs_discovered": 2, 00:10:06.133 "num_base_bdevs_operational": 4, 
00:10:06.133 "base_bdevs_list": [ 00:10:06.133 { 00:10:06.133 "name": "BaseBdev1", 00:10:06.133 "uuid": "05dcc249-7a68-464a-8073-ff245b3bc9f9", 00:10:06.133 "is_configured": true, 00:10:06.133 "data_offset": 2048, 00:10:06.133 "data_size": 63488 00:10:06.133 }, 00:10:06.133 { 00:10:06.133 "name": "BaseBdev2", 00:10:06.133 "uuid": "064a4ae0-669d-4650-8b9d-9b247771032c", 00:10:06.133 "is_configured": true, 00:10:06.133 "data_offset": 2048, 00:10:06.133 "data_size": 63488 00:10:06.133 }, 00:10:06.133 { 00:10:06.133 "name": "BaseBdev3", 00:10:06.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.133 "is_configured": false, 00:10:06.133 "data_offset": 0, 00:10:06.133 "data_size": 0 00:10:06.133 }, 00:10:06.133 { 00:10:06.133 "name": "BaseBdev4", 00:10:06.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.133 "is_configured": false, 00:10:06.133 "data_offset": 0, 00:10:06.133 "data_size": 0 00:10:06.133 } 00:10:06.133 ] 00:10:06.133 }' 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.133 10:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 [2024-11-18 10:38:32.439853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.701 BaseBdev3 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 [ 00:10:06.701 { 00:10:06.701 "name": "BaseBdev3", 00:10:06.701 "aliases": [ 00:10:06.701 "076608a7-19ed-4790-9585-44b25bbe9ecd" 00:10:06.701 ], 00:10:06.701 "product_name": "Malloc disk", 00:10:06.701 "block_size": 512, 00:10:06.701 "num_blocks": 65536, 00:10:06.701 "uuid": "076608a7-19ed-4790-9585-44b25bbe9ecd", 00:10:06.701 "assigned_rate_limits": { 00:10:06.701 "rw_ios_per_sec": 0, 00:10:06.701 "rw_mbytes_per_sec": 0, 00:10:06.701 "r_mbytes_per_sec": 0, 00:10:06.701 "w_mbytes_per_sec": 0 00:10:06.701 }, 00:10:06.701 "claimed": true, 00:10:06.701 "claim_type": "exclusive_write", 00:10:06.701 "zoned": false, 00:10:06.701 "supported_io_types": { 00:10:06.701 "read": true, 00:10:06.701 
"write": true, 00:10:06.701 "unmap": true, 00:10:06.701 "flush": true, 00:10:06.701 "reset": true, 00:10:06.701 "nvme_admin": false, 00:10:06.701 "nvme_io": false, 00:10:06.701 "nvme_io_md": false, 00:10:06.701 "write_zeroes": true, 00:10:06.701 "zcopy": true, 00:10:06.701 "get_zone_info": false, 00:10:06.701 "zone_management": false, 00:10:06.701 "zone_append": false, 00:10:06.701 "compare": false, 00:10:06.701 "compare_and_write": false, 00:10:06.701 "abort": true, 00:10:06.701 "seek_hole": false, 00:10:06.701 "seek_data": false, 00:10:06.701 "copy": true, 00:10:06.701 "nvme_iov_md": false 00:10:06.701 }, 00:10:06.701 "memory_domains": [ 00:10:06.701 { 00:10:06.701 "dma_device_id": "system", 00:10:06.701 "dma_device_type": 1 00:10:06.701 }, 00:10:06.701 { 00:10:06.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.701 "dma_device_type": 2 00:10:06.701 } 00:10:06.701 ], 00:10:06.701 "driver_specific": {} 00:10:06.701 } 00:10:06.701 ] 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.701 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.701 "name": "Existed_Raid", 00:10:06.701 "uuid": "2348c61b-248c-44fa-a40b-02a5b7103332", 00:10:06.701 "strip_size_kb": 64, 00:10:06.701 "state": "configuring", 00:10:06.701 "raid_level": "raid0", 00:10:06.701 "superblock": true, 00:10:06.701 "num_base_bdevs": 4, 00:10:06.701 "num_base_bdevs_discovered": 3, 00:10:06.701 "num_base_bdevs_operational": 4, 00:10:06.701 "base_bdevs_list": [ 00:10:06.701 { 00:10:06.701 "name": "BaseBdev1", 00:10:06.701 "uuid": "05dcc249-7a68-464a-8073-ff245b3bc9f9", 00:10:06.701 "is_configured": true, 00:10:06.701 "data_offset": 2048, 00:10:06.701 "data_size": 63488 00:10:06.701 }, 00:10:06.701 { 00:10:06.701 "name": "BaseBdev2", 00:10:06.701 "uuid": 
"064a4ae0-669d-4650-8b9d-9b247771032c", 00:10:06.702 "is_configured": true, 00:10:06.702 "data_offset": 2048, 00:10:06.702 "data_size": 63488 00:10:06.702 }, 00:10:06.702 { 00:10:06.702 "name": "BaseBdev3", 00:10:06.702 "uuid": "076608a7-19ed-4790-9585-44b25bbe9ecd", 00:10:06.702 "is_configured": true, 00:10:06.702 "data_offset": 2048, 00:10:06.702 "data_size": 63488 00:10:06.702 }, 00:10:06.702 { 00:10:06.702 "name": "BaseBdev4", 00:10:06.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.702 "is_configured": false, 00:10:06.702 "data_offset": 0, 00:10:06.702 "data_size": 0 00:10:06.702 } 00:10:06.702 ] 00:10:06.702 }' 00:10:06.702 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.702 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.270 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:07.270 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.270 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.270 [2024-11-18 10:38:32.958832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:07.271 [2024-11-18 10:38:32.959228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:07.271 [2024-11-18 10:38:32.959300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:07.271 [2024-11-18 10:38:32.959631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:07.271 BaseBdev4 00:10:07.271 [2024-11-18 10:38:32.959838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:07.271 [2024-11-18 10:38:32.959864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:07.271 [2024-11-18 10:38:32.960013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.271 [ 00:10:07.271 { 00:10:07.271 "name": "BaseBdev4", 00:10:07.271 "aliases": [ 00:10:07.271 "662d6dd3-bf37-40f0-8914-6c3e8a1cc308" 00:10:07.271 ], 00:10:07.271 "product_name": "Malloc disk", 00:10:07.271 "block_size": 512, 00:10:07.271 
"num_blocks": 65536, 00:10:07.271 "uuid": "662d6dd3-bf37-40f0-8914-6c3e8a1cc308", 00:10:07.271 "assigned_rate_limits": { 00:10:07.271 "rw_ios_per_sec": 0, 00:10:07.271 "rw_mbytes_per_sec": 0, 00:10:07.271 "r_mbytes_per_sec": 0, 00:10:07.271 "w_mbytes_per_sec": 0 00:10:07.271 }, 00:10:07.271 "claimed": true, 00:10:07.271 "claim_type": "exclusive_write", 00:10:07.271 "zoned": false, 00:10:07.271 "supported_io_types": { 00:10:07.271 "read": true, 00:10:07.271 "write": true, 00:10:07.271 "unmap": true, 00:10:07.271 "flush": true, 00:10:07.271 "reset": true, 00:10:07.271 "nvme_admin": false, 00:10:07.271 "nvme_io": false, 00:10:07.271 "nvme_io_md": false, 00:10:07.271 "write_zeroes": true, 00:10:07.271 "zcopy": true, 00:10:07.271 "get_zone_info": false, 00:10:07.271 "zone_management": false, 00:10:07.271 "zone_append": false, 00:10:07.271 "compare": false, 00:10:07.271 "compare_and_write": false, 00:10:07.271 "abort": true, 00:10:07.271 "seek_hole": false, 00:10:07.271 "seek_data": false, 00:10:07.271 "copy": true, 00:10:07.271 "nvme_iov_md": false 00:10:07.271 }, 00:10:07.271 "memory_domains": [ 00:10:07.271 { 00:10:07.271 "dma_device_id": "system", 00:10:07.271 "dma_device_type": 1 00:10:07.271 }, 00:10:07.271 { 00:10:07.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.271 "dma_device_type": 2 00:10:07.271 } 00:10:07.271 ], 00:10:07.271 "driver_specific": {} 00:10:07.271 } 00:10:07.271 ] 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.271 10:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.271 "name": "Existed_Raid", 00:10:07.271 "uuid": "2348c61b-248c-44fa-a40b-02a5b7103332", 00:10:07.271 "strip_size_kb": 64, 00:10:07.271 "state": "online", 00:10:07.271 "raid_level": "raid0", 00:10:07.271 "superblock": true, 00:10:07.271 "num_base_bdevs": 4, 
00:10:07.271 "num_base_bdevs_discovered": 4, 00:10:07.271 "num_base_bdevs_operational": 4, 00:10:07.271 "base_bdevs_list": [ 00:10:07.271 { 00:10:07.271 "name": "BaseBdev1", 00:10:07.271 "uuid": "05dcc249-7a68-464a-8073-ff245b3bc9f9", 00:10:07.271 "is_configured": true, 00:10:07.271 "data_offset": 2048, 00:10:07.271 "data_size": 63488 00:10:07.271 }, 00:10:07.271 { 00:10:07.271 "name": "BaseBdev2", 00:10:07.271 "uuid": "064a4ae0-669d-4650-8b9d-9b247771032c", 00:10:07.271 "is_configured": true, 00:10:07.271 "data_offset": 2048, 00:10:07.271 "data_size": 63488 00:10:07.271 }, 00:10:07.271 { 00:10:07.271 "name": "BaseBdev3", 00:10:07.271 "uuid": "076608a7-19ed-4790-9585-44b25bbe9ecd", 00:10:07.271 "is_configured": true, 00:10:07.271 "data_offset": 2048, 00:10:07.271 "data_size": 63488 00:10:07.271 }, 00:10:07.271 { 00:10:07.271 "name": "BaseBdev4", 00:10:07.271 "uuid": "662d6dd3-bf37-40f0-8914-6c3e8a1cc308", 00:10:07.271 "is_configured": true, 00:10:07.271 "data_offset": 2048, 00:10:07.271 "data_size": 63488 00:10:07.271 } 00:10:07.271 ] 00:10:07.271 }' 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.271 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.530 
10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.530 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.530 [2024-11-18 10:38:33.398424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.791 "name": "Existed_Raid", 00:10:07.791 "aliases": [ 00:10:07.791 "2348c61b-248c-44fa-a40b-02a5b7103332" 00:10:07.791 ], 00:10:07.791 "product_name": "Raid Volume", 00:10:07.791 "block_size": 512, 00:10:07.791 "num_blocks": 253952, 00:10:07.791 "uuid": "2348c61b-248c-44fa-a40b-02a5b7103332", 00:10:07.791 "assigned_rate_limits": { 00:10:07.791 "rw_ios_per_sec": 0, 00:10:07.791 "rw_mbytes_per_sec": 0, 00:10:07.791 "r_mbytes_per_sec": 0, 00:10:07.791 "w_mbytes_per_sec": 0 00:10:07.791 }, 00:10:07.791 "claimed": false, 00:10:07.791 "zoned": false, 00:10:07.791 "supported_io_types": { 00:10:07.791 "read": true, 00:10:07.791 "write": true, 00:10:07.791 "unmap": true, 00:10:07.791 "flush": true, 00:10:07.791 "reset": true, 00:10:07.791 "nvme_admin": false, 00:10:07.791 "nvme_io": false, 00:10:07.791 "nvme_io_md": false, 00:10:07.791 "write_zeroes": true, 00:10:07.791 "zcopy": false, 00:10:07.791 "get_zone_info": false, 00:10:07.791 "zone_management": false, 00:10:07.791 "zone_append": false, 00:10:07.791 "compare": false, 00:10:07.791 "compare_and_write": false, 00:10:07.791 "abort": false, 00:10:07.791 "seek_hole": false, 00:10:07.791 "seek_data": false, 00:10:07.791 "copy": false, 00:10:07.791 
"nvme_iov_md": false 00:10:07.791 }, 00:10:07.791 "memory_domains": [ 00:10:07.791 { 00:10:07.791 "dma_device_id": "system", 00:10:07.791 "dma_device_type": 1 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.791 "dma_device_type": 2 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "dma_device_id": "system", 00:10:07.791 "dma_device_type": 1 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.791 "dma_device_type": 2 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "dma_device_id": "system", 00:10:07.791 "dma_device_type": 1 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.791 "dma_device_type": 2 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "dma_device_id": "system", 00:10:07.791 "dma_device_type": 1 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.791 "dma_device_type": 2 00:10:07.791 } 00:10:07.791 ], 00:10:07.791 "driver_specific": { 00:10:07.791 "raid": { 00:10:07.791 "uuid": "2348c61b-248c-44fa-a40b-02a5b7103332", 00:10:07.791 "strip_size_kb": 64, 00:10:07.791 "state": "online", 00:10:07.791 "raid_level": "raid0", 00:10:07.791 "superblock": true, 00:10:07.791 "num_base_bdevs": 4, 00:10:07.791 "num_base_bdevs_discovered": 4, 00:10:07.791 "num_base_bdevs_operational": 4, 00:10:07.791 "base_bdevs_list": [ 00:10:07.791 { 00:10:07.791 "name": "BaseBdev1", 00:10:07.791 "uuid": "05dcc249-7a68-464a-8073-ff245b3bc9f9", 00:10:07.791 "is_configured": true, 00:10:07.791 "data_offset": 2048, 00:10:07.791 "data_size": 63488 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "name": "BaseBdev2", 00:10:07.791 "uuid": "064a4ae0-669d-4650-8b9d-9b247771032c", 00:10:07.791 "is_configured": true, 00:10:07.791 "data_offset": 2048, 00:10:07.791 "data_size": 63488 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "name": "BaseBdev3", 00:10:07.791 "uuid": "076608a7-19ed-4790-9585-44b25bbe9ecd", 00:10:07.791 "is_configured": true, 
00:10:07.791 "data_offset": 2048, 00:10:07.791 "data_size": 63488 00:10:07.791 }, 00:10:07.791 { 00:10:07.791 "name": "BaseBdev4", 00:10:07.791 "uuid": "662d6dd3-bf37-40f0-8914-6c3e8a1cc308", 00:10:07.791 "is_configured": true, 00:10:07.791 "data_offset": 2048, 00:10:07.791 "data_size": 63488 00:10:07.791 } 00:10:07.791 ] 00:10:07.791 } 00:10:07.791 } 00:10:07.791 }' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:07.791 BaseBdev2 00:10:07.791 BaseBdev3 00:10:07.791 BaseBdev4' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.791 10:38:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.791 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.051 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.052 [2024-11-18 10:38:33.721591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.052 [2024-11-18 10:38:33.721617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.052 [2024-11-18 10:38:33.721662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.052 "name": "Existed_Raid", 00:10:08.052 "uuid": "2348c61b-248c-44fa-a40b-02a5b7103332", 00:10:08.052 "strip_size_kb": 64, 00:10:08.052 "state": "offline", 00:10:08.052 "raid_level": "raid0", 00:10:08.052 "superblock": true, 00:10:08.052 "num_base_bdevs": 4, 00:10:08.052 "num_base_bdevs_discovered": 3, 00:10:08.052 "num_base_bdevs_operational": 3, 00:10:08.052 "base_bdevs_list": [ 00:10:08.052 { 00:10:08.052 "name": null, 00:10:08.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.052 "is_configured": false, 00:10:08.052 "data_offset": 0, 00:10:08.052 "data_size": 63488 00:10:08.052 }, 00:10:08.052 { 00:10:08.052 "name": "BaseBdev2", 00:10:08.052 "uuid": "064a4ae0-669d-4650-8b9d-9b247771032c", 00:10:08.052 "is_configured": true, 00:10:08.052 "data_offset": 2048, 00:10:08.052 "data_size": 63488 00:10:08.052 }, 00:10:08.052 { 00:10:08.052 "name": "BaseBdev3", 00:10:08.052 "uuid": "076608a7-19ed-4790-9585-44b25bbe9ecd", 00:10:08.052 "is_configured": true, 00:10:08.052 "data_offset": 2048, 00:10:08.052 "data_size": 63488 00:10:08.052 }, 00:10:08.052 { 00:10:08.052 "name": "BaseBdev4", 00:10:08.052 "uuid": "662d6dd3-bf37-40f0-8914-6c3e8a1cc308", 00:10:08.052 "is_configured": true, 00:10:08.052 "data_offset": 2048, 00:10:08.052 "data_size": 63488 00:10:08.052 } 00:10:08.052 ] 00:10:08.052 }' 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.052 10:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.621 
10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 [2024-11-18 10:38:34.311277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 [2024-11-18 10:38:34.469321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:08.881 10:38:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.881 [2024-11-18 10:38:34.625386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:08.881 [2024-11-18 10:38:34.625506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:08.881 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.141 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 BaseBdev2 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 [ 00:10:09.142 { 00:10:09.142 "name": "BaseBdev2", 00:10:09.142 "aliases": [ 00:10:09.142 
"a86f500d-adbe-4c21-a4db-587769620082" 00:10:09.142 ], 00:10:09.142 "product_name": "Malloc disk", 00:10:09.142 "block_size": 512, 00:10:09.142 "num_blocks": 65536, 00:10:09.142 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:09.142 "assigned_rate_limits": { 00:10:09.142 "rw_ios_per_sec": 0, 00:10:09.142 "rw_mbytes_per_sec": 0, 00:10:09.142 "r_mbytes_per_sec": 0, 00:10:09.142 "w_mbytes_per_sec": 0 00:10:09.142 }, 00:10:09.142 "claimed": false, 00:10:09.142 "zoned": false, 00:10:09.142 "supported_io_types": { 00:10:09.142 "read": true, 00:10:09.142 "write": true, 00:10:09.142 "unmap": true, 00:10:09.142 "flush": true, 00:10:09.142 "reset": true, 00:10:09.142 "nvme_admin": false, 00:10:09.142 "nvme_io": false, 00:10:09.142 "nvme_io_md": false, 00:10:09.142 "write_zeroes": true, 00:10:09.142 "zcopy": true, 00:10:09.142 "get_zone_info": false, 00:10:09.142 "zone_management": false, 00:10:09.142 "zone_append": false, 00:10:09.142 "compare": false, 00:10:09.142 "compare_and_write": false, 00:10:09.142 "abort": true, 00:10:09.142 "seek_hole": false, 00:10:09.142 "seek_data": false, 00:10:09.142 "copy": true, 00:10:09.142 "nvme_iov_md": false 00:10:09.142 }, 00:10:09.142 "memory_domains": [ 00:10:09.142 { 00:10:09.142 "dma_device_id": "system", 00:10:09.142 "dma_device_type": 1 00:10:09.142 }, 00:10:09.142 { 00:10:09.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.142 "dma_device_type": 2 00:10:09.142 } 00:10:09.142 ], 00:10:09.142 "driver_specific": {} 00:10:09.142 } 00:10:09.142 ] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.142 10:38:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 BaseBdev3 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 [ 00:10:09.142 { 
00:10:09.142 "name": "BaseBdev3", 00:10:09.142 "aliases": [ 00:10:09.142 "8f3f6e29-574b-4293-9391-475533641884" 00:10:09.142 ], 00:10:09.142 "product_name": "Malloc disk", 00:10:09.142 "block_size": 512, 00:10:09.142 "num_blocks": 65536, 00:10:09.142 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:09.142 "assigned_rate_limits": { 00:10:09.142 "rw_ios_per_sec": 0, 00:10:09.142 "rw_mbytes_per_sec": 0, 00:10:09.142 "r_mbytes_per_sec": 0, 00:10:09.142 "w_mbytes_per_sec": 0 00:10:09.142 }, 00:10:09.142 "claimed": false, 00:10:09.142 "zoned": false, 00:10:09.142 "supported_io_types": { 00:10:09.142 "read": true, 00:10:09.142 "write": true, 00:10:09.142 "unmap": true, 00:10:09.142 "flush": true, 00:10:09.142 "reset": true, 00:10:09.142 "nvme_admin": false, 00:10:09.142 "nvme_io": false, 00:10:09.142 "nvme_io_md": false, 00:10:09.142 "write_zeroes": true, 00:10:09.142 "zcopy": true, 00:10:09.142 "get_zone_info": false, 00:10:09.142 "zone_management": false, 00:10:09.142 "zone_append": false, 00:10:09.142 "compare": false, 00:10:09.142 "compare_and_write": false, 00:10:09.142 "abort": true, 00:10:09.142 "seek_hole": false, 00:10:09.142 "seek_data": false, 00:10:09.142 "copy": true, 00:10:09.142 "nvme_iov_md": false 00:10:09.142 }, 00:10:09.142 "memory_domains": [ 00:10:09.142 { 00:10:09.142 "dma_device_id": "system", 00:10:09.142 "dma_device_type": 1 00:10:09.142 }, 00:10:09.142 { 00:10:09.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.142 "dma_device_type": 2 00:10:09.142 } 00:10:09.142 ], 00:10:09.142 "driver_specific": {} 00:10:09.142 } 00:10:09.142 ] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 BaseBdev4 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.142 10:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:09.142 [ 00:10:09.142 { 00:10:09.142 "name": "BaseBdev4", 00:10:09.142 "aliases": [ 00:10:09.142 "4d4d1482-0e2a-4f07-8b30-1145d4db000e" 00:10:09.142 ], 00:10:09.142 "product_name": "Malloc disk", 00:10:09.142 "block_size": 512, 00:10:09.142 "num_blocks": 65536, 00:10:09.143 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:09.143 "assigned_rate_limits": { 00:10:09.143 "rw_ios_per_sec": 0, 00:10:09.143 "rw_mbytes_per_sec": 0, 00:10:09.143 "r_mbytes_per_sec": 0, 00:10:09.143 "w_mbytes_per_sec": 0 00:10:09.143 }, 00:10:09.143 "claimed": false, 00:10:09.143 "zoned": false, 00:10:09.143 "supported_io_types": { 00:10:09.143 "read": true, 00:10:09.143 "write": true, 00:10:09.143 "unmap": true, 00:10:09.143 "flush": true, 00:10:09.143 "reset": true, 00:10:09.143 "nvme_admin": false, 00:10:09.143 "nvme_io": false, 00:10:09.143 "nvme_io_md": false, 00:10:09.143 "write_zeroes": true, 00:10:09.143 "zcopy": true, 00:10:09.143 "get_zone_info": false, 00:10:09.143 "zone_management": false, 00:10:09.143 "zone_append": false, 00:10:09.143 "compare": false, 00:10:09.143 "compare_and_write": false, 00:10:09.143 "abort": true, 00:10:09.143 "seek_hole": false, 00:10:09.143 "seek_data": false, 00:10:09.143 "copy": true, 00:10:09.143 "nvme_iov_md": false 00:10:09.143 }, 00:10:09.143 "memory_domains": [ 00:10:09.143 { 00:10:09.143 "dma_device_id": "system", 00:10:09.143 "dma_device_type": 1 00:10:09.143 }, 00:10:09.143 { 00:10:09.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.143 "dma_device_type": 2 00:10:09.143 } 00:10:09.143 ], 00:10:09.143 "driver_specific": {} 00:10:09.143 } 00:10:09.143 ] 00:10:09.143 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.143 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.143 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.143 10:38:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.143 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:09.143 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.143 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.402 [2024-11-18 10:38:35.024938] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.402 [2024-11-18 10:38:35.025065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.402 [2024-11-18 10:38:35.025108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.402 [2024-11-18 10:38:35.027277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.402 [2024-11-18 10:38:35.027381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:09.402 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.402 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.402 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.402 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.403 "name": "Existed_Raid", 00:10:09.403 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:09.403 "strip_size_kb": 64, 00:10:09.403 "state": "configuring", 00:10:09.403 "raid_level": "raid0", 00:10:09.403 "superblock": true, 00:10:09.403 "num_base_bdevs": 4, 00:10:09.403 "num_base_bdevs_discovered": 3, 00:10:09.403 "num_base_bdevs_operational": 4, 00:10:09.403 "base_bdevs_list": [ 00:10:09.403 { 00:10:09.403 "name": "BaseBdev1", 00:10:09.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.403 "is_configured": false, 00:10:09.403 "data_offset": 0, 00:10:09.403 "data_size": 0 00:10:09.403 }, 00:10:09.403 { 00:10:09.403 "name": "BaseBdev2", 00:10:09.403 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:09.403 "is_configured": true, 00:10:09.403 "data_offset": 2048, 00:10:09.403 "data_size": 63488 
00:10:09.403 }, 00:10:09.403 { 00:10:09.403 "name": "BaseBdev3", 00:10:09.403 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:09.403 "is_configured": true, 00:10:09.403 "data_offset": 2048, 00:10:09.403 "data_size": 63488 00:10:09.403 }, 00:10:09.403 { 00:10:09.403 "name": "BaseBdev4", 00:10:09.403 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:09.403 "is_configured": true, 00:10:09.403 "data_offset": 2048, 00:10:09.403 "data_size": 63488 00:10:09.403 } 00:10:09.403 ] 00:10:09.403 }' 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.403 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.663 [2024-11-18 10:38:35.464137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.663 "name": "Existed_Raid", 00:10:09.663 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:09.663 "strip_size_kb": 64, 00:10:09.663 "state": "configuring", 00:10:09.663 "raid_level": "raid0", 00:10:09.663 "superblock": true, 00:10:09.663 "num_base_bdevs": 4, 00:10:09.663 "num_base_bdevs_discovered": 2, 00:10:09.663 "num_base_bdevs_operational": 4, 00:10:09.663 "base_bdevs_list": [ 00:10:09.663 { 00:10:09.663 "name": "BaseBdev1", 00:10:09.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.663 "is_configured": false, 00:10:09.663 "data_offset": 0, 00:10:09.663 "data_size": 0 00:10:09.663 }, 00:10:09.663 { 00:10:09.663 "name": null, 00:10:09.663 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:09.663 "is_configured": false, 00:10:09.663 "data_offset": 0, 00:10:09.663 "data_size": 63488 
00:10:09.663 }, 00:10:09.663 { 00:10:09.663 "name": "BaseBdev3", 00:10:09.663 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:09.663 "is_configured": true, 00:10:09.663 "data_offset": 2048, 00:10:09.663 "data_size": 63488 00:10:09.663 }, 00:10:09.663 { 00:10:09.663 "name": "BaseBdev4", 00:10:09.663 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:09.663 "is_configured": true, 00:10:09.663 "data_offset": 2048, 00:10:09.663 "data_size": 63488 00:10:09.663 } 00:10:09.663 ] 00:10:09.663 }' 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.663 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.232 [2024-11-18 10:38:35.972107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.232 BaseBdev1 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.232 10:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.232 [ 00:10:10.232 { 00:10:10.232 "name": "BaseBdev1", 00:10:10.232 "aliases": [ 00:10:10.232 "f5d11d1e-2a41-41c4-87d4-4e43826957c5" 00:10:10.232 ], 00:10:10.232 "product_name": "Malloc disk", 00:10:10.232 "block_size": 512, 00:10:10.232 "num_blocks": 65536, 00:10:10.232 "uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:10.232 "assigned_rate_limits": { 00:10:10.232 "rw_ios_per_sec": 0, 00:10:10.232 "rw_mbytes_per_sec": 0, 
00:10:10.232 "r_mbytes_per_sec": 0, 00:10:10.232 "w_mbytes_per_sec": 0 00:10:10.232 }, 00:10:10.232 "claimed": true, 00:10:10.232 "claim_type": "exclusive_write", 00:10:10.232 "zoned": false, 00:10:10.232 "supported_io_types": { 00:10:10.232 "read": true, 00:10:10.232 "write": true, 00:10:10.232 "unmap": true, 00:10:10.232 "flush": true, 00:10:10.232 "reset": true, 00:10:10.232 "nvme_admin": false, 00:10:10.232 "nvme_io": false, 00:10:10.232 "nvme_io_md": false, 00:10:10.232 "write_zeroes": true, 00:10:10.232 "zcopy": true, 00:10:10.232 "get_zone_info": false, 00:10:10.232 "zone_management": false, 00:10:10.232 "zone_append": false, 00:10:10.232 "compare": false, 00:10:10.232 "compare_and_write": false, 00:10:10.232 "abort": true, 00:10:10.232 "seek_hole": false, 00:10:10.232 "seek_data": false, 00:10:10.232 "copy": true, 00:10:10.232 "nvme_iov_md": false 00:10:10.232 }, 00:10:10.232 "memory_domains": [ 00:10:10.232 { 00:10:10.232 "dma_device_id": "system", 00:10:10.232 "dma_device_type": 1 00:10:10.232 }, 00:10:10.232 { 00:10:10.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.232 "dma_device_type": 2 00:10:10.232 } 00:10:10.232 ], 00:10:10.232 "driver_specific": {} 00:10:10.232 } 00:10:10.232 ] 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.232 10:38:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.232 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.232 "name": "Existed_Raid", 00:10:10.232 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:10.232 "strip_size_kb": 64, 00:10:10.232 "state": "configuring", 00:10:10.232 "raid_level": "raid0", 00:10:10.232 "superblock": true, 00:10:10.232 "num_base_bdevs": 4, 00:10:10.232 "num_base_bdevs_discovered": 3, 00:10:10.232 "num_base_bdevs_operational": 4, 00:10:10.232 "base_bdevs_list": [ 00:10:10.232 { 00:10:10.232 "name": "BaseBdev1", 00:10:10.232 "uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:10.232 "is_configured": true, 00:10:10.233 "data_offset": 2048, 00:10:10.233 "data_size": 63488 00:10:10.233 }, 00:10:10.233 { 
00:10:10.233 "name": null, 00:10:10.233 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:10.233 "is_configured": false, 00:10:10.233 "data_offset": 0, 00:10:10.233 "data_size": 63488 00:10:10.233 }, 00:10:10.233 { 00:10:10.233 "name": "BaseBdev3", 00:10:10.233 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:10.233 "is_configured": true, 00:10:10.233 "data_offset": 2048, 00:10:10.233 "data_size": 63488 00:10:10.233 }, 00:10:10.233 { 00:10:10.233 "name": "BaseBdev4", 00:10:10.233 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:10.233 "is_configured": true, 00:10:10.233 "data_offset": 2048, 00:10:10.233 "data_size": 63488 00:10:10.233 } 00:10:10.233 ] 00:10:10.233 }' 00:10:10.233 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.233 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.801 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 [2024-11-18 10:38:36.519231] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.802 10:38:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.802 "name": "Existed_Raid", 00:10:10.802 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:10.802 "strip_size_kb": 64, 00:10:10.802 "state": "configuring", 00:10:10.802 "raid_level": "raid0", 00:10:10.802 "superblock": true, 00:10:10.802 "num_base_bdevs": 4, 00:10:10.802 "num_base_bdevs_discovered": 2, 00:10:10.802 "num_base_bdevs_operational": 4, 00:10:10.802 "base_bdevs_list": [ 00:10:10.802 { 00:10:10.802 "name": "BaseBdev1", 00:10:10.802 "uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:10.802 "is_configured": true, 00:10:10.802 "data_offset": 2048, 00:10:10.802 "data_size": 63488 00:10:10.802 }, 00:10:10.802 { 00:10:10.802 "name": null, 00:10:10.802 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:10.802 "is_configured": false, 00:10:10.802 "data_offset": 0, 00:10:10.802 "data_size": 63488 00:10:10.802 }, 00:10:10.802 { 00:10:10.802 "name": null, 00:10:10.802 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:10.802 "is_configured": false, 00:10:10.802 "data_offset": 0, 00:10:10.802 "data_size": 63488 00:10:10.802 }, 00:10:10.802 { 00:10:10.802 "name": "BaseBdev4", 00:10:10.802 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:10.802 "is_configured": true, 00:10:10.802 "data_offset": 2048, 00:10:10.802 "data_size": 63488 00:10:10.802 } 00:10:10.802 ] 00:10:10.802 }' 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.802 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.371 10:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.371 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.371 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.371 10:38:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.371 10:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.371 [2024-11-18 10:38:37.030591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.371 "name": "Existed_Raid", 00:10:11.371 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:11.371 "strip_size_kb": 64, 00:10:11.371 "state": "configuring", 00:10:11.371 "raid_level": "raid0", 00:10:11.371 "superblock": true, 00:10:11.371 "num_base_bdevs": 4, 00:10:11.371 "num_base_bdevs_discovered": 3, 00:10:11.371 "num_base_bdevs_operational": 4, 00:10:11.371 "base_bdevs_list": [ 00:10:11.371 { 00:10:11.371 "name": "BaseBdev1", 00:10:11.371 "uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:11.371 "is_configured": true, 00:10:11.371 "data_offset": 2048, 00:10:11.371 "data_size": 63488 00:10:11.371 }, 00:10:11.371 { 00:10:11.371 "name": null, 00:10:11.371 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:11.371 "is_configured": false, 00:10:11.371 "data_offset": 0, 00:10:11.371 "data_size": 63488 00:10:11.371 }, 00:10:11.371 { 00:10:11.371 "name": "BaseBdev3", 00:10:11.371 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:11.371 "is_configured": true, 00:10:11.371 "data_offset": 2048, 00:10:11.371 "data_size": 63488 00:10:11.371 }, 00:10:11.371 { 00:10:11.371 "name": "BaseBdev4", 00:10:11.371 "uuid": 
"4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:11.371 "is_configured": true, 00:10:11.371 "data_offset": 2048, 00:10:11.371 "data_size": 63488 00:10:11.371 } 00:10:11.371 ] 00:10:11.371 }' 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.371 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.630 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.631 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.631 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.631 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.631 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.631 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:11.631 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.631 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.631 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.631 [2024-11-18 10:38:37.481850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.890 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.890 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:11.890 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.890 10:38:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.890 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.890 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.890 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.890 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.891 "name": "Existed_Raid", 00:10:11.891 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:11.891 "strip_size_kb": 64, 00:10:11.891 "state": "configuring", 00:10:11.891 "raid_level": "raid0", 00:10:11.891 "superblock": true, 00:10:11.891 "num_base_bdevs": 4, 00:10:11.891 "num_base_bdevs_discovered": 2, 00:10:11.891 "num_base_bdevs_operational": 4, 00:10:11.891 "base_bdevs_list": [ 00:10:11.891 { 00:10:11.891 "name": null, 00:10:11.891 
"uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:11.891 "is_configured": false, 00:10:11.891 "data_offset": 0, 00:10:11.891 "data_size": 63488 00:10:11.891 }, 00:10:11.891 { 00:10:11.891 "name": null, 00:10:11.891 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:11.891 "is_configured": false, 00:10:11.891 "data_offset": 0, 00:10:11.891 "data_size": 63488 00:10:11.891 }, 00:10:11.891 { 00:10:11.891 "name": "BaseBdev3", 00:10:11.891 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:11.891 "is_configured": true, 00:10:11.891 "data_offset": 2048, 00:10:11.891 "data_size": 63488 00:10:11.891 }, 00:10:11.891 { 00:10:11.891 "name": "BaseBdev4", 00:10:11.891 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:11.891 "is_configured": true, 00:10:11.891 "data_offset": 2048, 00:10:11.891 "data_size": 63488 00:10:11.891 } 00:10:11.891 ] 00:10:11.891 }' 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.891 10:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.151 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.151 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.151 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.151 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.151 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.411 [2024-11-18 10:38:38.074995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.411 10:38:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.411 "name": "Existed_Raid", 00:10:12.411 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:12.411 "strip_size_kb": 64, 00:10:12.411 "state": "configuring", 00:10:12.411 "raid_level": "raid0", 00:10:12.411 "superblock": true, 00:10:12.411 "num_base_bdevs": 4, 00:10:12.411 "num_base_bdevs_discovered": 3, 00:10:12.411 "num_base_bdevs_operational": 4, 00:10:12.411 "base_bdevs_list": [ 00:10:12.411 { 00:10:12.411 "name": null, 00:10:12.411 "uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:12.411 "is_configured": false, 00:10:12.411 "data_offset": 0, 00:10:12.411 "data_size": 63488 00:10:12.411 }, 00:10:12.411 { 00:10:12.411 "name": "BaseBdev2", 00:10:12.411 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:12.411 "is_configured": true, 00:10:12.411 "data_offset": 2048, 00:10:12.411 "data_size": 63488 00:10:12.411 }, 00:10:12.411 { 00:10:12.411 "name": "BaseBdev3", 00:10:12.411 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:12.411 "is_configured": true, 00:10:12.411 "data_offset": 2048, 00:10:12.411 "data_size": 63488 00:10:12.411 }, 00:10:12.411 { 00:10:12.411 "name": "BaseBdev4", 00:10:12.411 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:12.411 "is_configured": true, 00:10:12.411 "data_offset": 2048, 00:10:12.411 "data_size": 63488 00:10:12.411 } 00:10:12.411 ] 00:10:12.411 }' 00:10:12.411 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.412 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.671 10:38:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:12.671 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.931 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f5d11d1e-2a41-41c4-87d4-4e43826957c5 00:10:12.931 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.931 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.931 [2024-11-18 10:38:38.615275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:12.931 [2024-11-18 10:38:38.615591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:12.931 NewBaseBdev 00:10:12.931 [2024-11-18 10:38:38.615640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:12.931 [2024-11-18 10:38:38.615947] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:12.932 [2024-11-18 10:38:38.616109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:12.932 [2024-11-18 10:38:38.616122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:12.932 [2024-11-18 10:38:38.616283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.932 
10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.932 [ 00:10:12.932 { 00:10:12.932 "name": "NewBaseBdev", 00:10:12.932 "aliases": [ 00:10:12.932 "f5d11d1e-2a41-41c4-87d4-4e43826957c5" 00:10:12.932 ], 00:10:12.932 "product_name": "Malloc disk", 00:10:12.932 "block_size": 512, 00:10:12.932 "num_blocks": 65536, 00:10:12.932 "uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:12.932 "assigned_rate_limits": { 00:10:12.932 "rw_ios_per_sec": 0, 00:10:12.932 "rw_mbytes_per_sec": 0, 00:10:12.932 "r_mbytes_per_sec": 0, 00:10:12.932 "w_mbytes_per_sec": 0 00:10:12.932 }, 00:10:12.932 "claimed": true, 00:10:12.932 "claim_type": "exclusive_write", 00:10:12.932 "zoned": false, 00:10:12.932 "supported_io_types": { 00:10:12.932 "read": true, 00:10:12.932 "write": true, 00:10:12.932 "unmap": true, 00:10:12.932 "flush": true, 00:10:12.932 "reset": true, 00:10:12.932 "nvme_admin": false, 00:10:12.932 "nvme_io": false, 00:10:12.932 "nvme_io_md": false, 00:10:12.932 "write_zeroes": true, 00:10:12.932 "zcopy": true, 00:10:12.932 "get_zone_info": false, 00:10:12.932 "zone_management": false, 00:10:12.932 "zone_append": false, 00:10:12.932 "compare": false, 00:10:12.932 "compare_and_write": false, 00:10:12.932 "abort": true, 00:10:12.932 "seek_hole": false, 00:10:12.932 "seek_data": false, 00:10:12.932 "copy": true, 00:10:12.932 "nvme_iov_md": false 00:10:12.932 }, 00:10:12.932 "memory_domains": [ 00:10:12.932 { 00:10:12.932 "dma_device_id": "system", 00:10:12.932 "dma_device_type": 1 00:10:12.932 }, 00:10:12.932 { 00:10:12.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.932 "dma_device_type": 2 00:10:12.932 } 00:10:12.932 ], 00:10:12.932 "driver_specific": {} 00:10:12.932 } 00:10:12.932 ] 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.932 10:38:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.932 "name": "Existed_Raid", 00:10:12.932 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:12.932 "strip_size_kb": 64, 00:10:12.932 
"state": "online", 00:10:12.932 "raid_level": "raid0", 00:10:12.932 "superblock": true, 00:10:12.932 "num_base_bdevs": 4, 00:10:12.932 "num_base_bdevs_discovered": 4, 00:10:12.932 "num_base_bdevs_operational": 4, 00:10:12.932 "base_bdevs_list": [ 00:10:12.932 { 00:10:12.932 "name": "NewBaseBdev", 00:10:12.932 "uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:12.932 "is_configured": true, 00:10:12.932 "data_offset": 2048, 00:10:12.932 "data_size": 63488 00:10:12.932 }, 00:10:12.932 { 00:10:12.932 "name": "BaseBdev2", 00:10:12.932 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:12.932 "is_configured": true, 00:10:12.932 "data_offset": 2048, 00:10:12.932 "data_size": 63488 00:10:12.932 }, 00:10:12.932 { 00:10:12.932 "name": "BaseBdev3", 00:10:12.932 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:12.932 "is_configured": true, 00:10:12.932 "data_offset": 2048, 00:10:12.932 "data_size": 63488 00:10:12.932 }, 00:10:12.932 { 00:10:12.932 "name": "BaseBdev4", 00:10:12.932 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:12.932 "is_configured": true, 00:10:12.932 "data_offset": 2048, 00:10:12.932 "data_size": 63488 00:10:12.932 } 00:10:12.932 ] 00:10:12.932 }' 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.932 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.500 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.500 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.500 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.500 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.500 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.501 
10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.501 [2024-11-18 10:38:39.094968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.501 "name": "Existed_Raid", 00:10:13.501 "aliases": [ 00:10:13.501 "899afe05-0fff-4995-9612-ed5e4231f051" 00:10:13.501 ], 00:10:13.501 "product_name": "Raid Volume", 00:10:13.501 "block_size": 512, 00:10:13.501 "num_blocks": 253952, 00:10:13.501 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:13.501 "assigned_rate_limits": { 00:10:13.501 "rw_ios_per_sec": 0, 00:10:13.501 "rw_mbytes_per_sec": 0, 00:10:13.501 "r_mbytes_per_sec": 0, 00:10:13.501 "w_mbytes_per_sec": 0 00:10:13.501 }, 00:10:13.501 "claimed": false, 00:10:13.501 "zoned": false, 00:10:13.501 "supported_io_types": { 00:10:13.501 "read": true, 00:10:13.501 "write": true, 00:10:13.501 "unmap": true, 00:10:13.501 "flush": true, 00:10:13.501 "reset": true, 00:10:13.501 "nvme_admin": false, 00:10:13.501 "nvme_io": false, 00:10:13.501 "nvme_io_md": false, 00:10:13.501 "write_zeroes": true, 00:10:13.501 "zcopy": false, 00:10:13.501 "get_zone_info": false, 00:10:13.501 "zone_management": false, 00:10:13.501 "zone_append": false, 00:10:13.501 "compare": false, 00:10:13.501 "compare_and_write": false, 00:10:13.501 "abort": 
false, 00:10:13.501 "seek_hole": false, 00:10:13.501 "seek_data": false, 00:10:13.501 "copy": false, 00:10:13.501 "nvme_iov_md": false 00:10:13.501 }, 00:10:13.501 "memory_domains": [ 00:10:13.501 { 00:10:13.501 "dma_device_id": "system", 00:10:13.501 "dma_device_type": 1 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.501 "dma_device_type": 2 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "dma_device_id": "system", 00:10:13.501 "dma_device_type": 1 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.501 "dma_device_type": 2 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "dma_device_id": "system", 00:10:13.501 "dma_device_type": 1 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.501 "dma_device_type": 2 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "dma_device_id": "system", 00:10:13.501 "dma_device_type": 1 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.501 "dma_device_type": 2 00:10:13.501 } 00:10:13.501 ], 00:10:13.501 "driver_specific": { 00:10:13.501 "raid": { 00:10:13.501 "uuid": "899afe05-0fff-4995-9612-ed5e4231f051", 00:10:13.501 "strip_size_kb": 64, 00:10:13.501 "state": "online", 00:10:13.501 "raid_level": "raid0", 00:10:13.501 "superblock": true, 00:10:13.501 "num_base_bdevs": 4, 00:10:13.501 "num_base_bdevs_discovered": 4, 00:10:13.501 "num_base_bdevs_operational": 4, 00:10:13.501 "base_bdevs_list": [ 00:10:13.501 { 00:10:13.501 "name": "NewBaseBdev", 00:10:13.501 "uuid": "f5d11d1e-2a41-41c4-87d4-4e43826957c5", 00:10:13.501 "is_configured": true, 00:10:13.501 "data_offset": 2048, 00:10:13.501 "data_size": 63488 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "name": "BaseBdev2", 00:10:13.501 "uuid": "a86f500d-adbe-4c21-a4db-587769620082", 00:10:13.501 "is_configured": true, 00:10:13.501 "data_offset": 2048, 00:10:13.501 "data_size": 63488 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 
"name": "BaseBdev3", 00:10:13.501 "uuid": "8f3f6e29-574b-4293-9391-475533641884", 00:10:13.501 "is_configured": true, 00:10:13.501 "data_offset": 2048, 00:10:13.501 "data_size": 63488 00:10:13.501 }, 00:10:13.501 { 00:10:13.501 "name": "BaseBdev4", 00:10:13.501 "uuid": "4d4d1482-0e2a-4f07-8b30-1145d4db000e", 00:10:13.501 "is_configured": true, 00:10:13.501 "data_offset": 2048, 00:10:13.501 "data_size": 63488 00:10:13.501 } 00:10:13.501 ] 00:10:13.501 } 00:10:13.501 } 00:10:13.501 }' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:13.501 BaseBdev2 00:10:13.501 BaseBdev3 00:10:13.501 BaseBdev4' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.501 10:38:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.501 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.760 [2024-11-18 10:38:39.406057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.760 [2024-11-18 10:38:39.406086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.760 [2024-11-18 10:38:39.406162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.760 [2024-11-18 10:38:39.406244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.760 [2024-11-18 10:38:39.406254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69915 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69915 ']' 00:10:13.760 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69915 00:10:13.761 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:13.761 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.761 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69915 00:10:13.761 killing process with pid 69915 00:10:13.761 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.761 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.761 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69915' 00:10:13.761 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69915 00:10:13.761 [2024-11-18 10:38:39.450932] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.761 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69915 00:10:14.020 [2024-11-18 10:38:39.864622] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.403 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:15.403 00:10:15.403 real 0m11.526s 00:10:15.403 user 0m17.994s 00:10:15.403 sys 0m2.204s 00:10:15.403 ************************************ 00:10:15.403 END TEST raid_state_function_test_sb 00:10:15.403 
************************************ 00:10:15.403 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.403 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.403 10:38:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:15.403 10:38:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.403 10:38:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.403 10:38:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.403 ************************************ 00:10:15.403 START TEST raid_superblock_test 00:10:15.403 ************************************ 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70580 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70580 00:10:15.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70580 ']' 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.403 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.403 [2024-11-18 10:38:41.183936] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:15.403 [2024-11-18 10:38:41.184135] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70580 ] 00:10:15.663 [2024-11-18 10:38:41.356009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.664 [2024-11-18 10:38:41.489144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.924 [2024-11-18 10:38:41.718769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.924 [2024-11-18 10:38:41.718903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:16.184 
10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.184 10:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.184 malloc1 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.184 [2024-11-18 10:38:42.051437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.184 [2024-11-18 10:38:42.051514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.184 [2024-11-18 10:38:42.051540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:16.184 [2024-11-18 10:38:42.051550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.184 [2024-11-18 10:38:42.053942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.184 [2024-11-18 10:38:42.053981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.184 pt1 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.184 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.445 malloc2 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.445 [2024-11-18 10:38:42.110825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.445 [2024-11-18 10:38:42.110990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.445 [2024-11-18 10:38:42.111033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:16.445 [2024-11-18 10:38:42.111063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.445 [2024-11-18 10:38:42.113493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.445 [2024-11-18 10:38:42.113561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.445 
pt2 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.445 malloc3 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.445 [2024-11-18 10:38:42.184282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.445 [2024-11-18 10:38:42.184380] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.445 [2024-11-18 10:38:42.184419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:16.445 [2024-11-18 10:38:42.184448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.445 [2024-11-18 10:38:42.186731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.445 [2024-11-18 10:38:42.186795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.445 pt3 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.445 malloc4 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.445 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.445 [2024-11-18 10:38:42.247768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:16.445 [2024-11-18 10:38:42.247816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.445 [2024-11-18 10:38:42.247834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:16.445 [2024-11-18 10:38:42.247843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.445 [2024-11-18 10:38:42.250087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.445 [2024-11-18 10:38:42.250120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:16.445 pt4 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.446 [2024-11-18 10:38:42.259790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.446 [2024-11-18 
10:38:42.261830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.446 [2024-11-18 10:38:42.261893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.446 [2024-11-18 10:38:42.261954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:16.446 [2024-11-18 10:38:42.262131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:16.446 [2024-11-18 10:38:42.262142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:16.446 [2024-11-18 10:38:42.262402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:16.446 [2024-11-18 10:38:42.262568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:16.446 [2024-11-18 10:38:42.262581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:16.446 [2024-11-18 10:38:42.262723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.446 "name": "raid_bdev1", 00:10:16.446 "uuid": "c24fb416-89ed-4fbc-ae79-f0acabcb97b3", 00:10:16.446 "strip_size_kb": 64, 00:10:16.446 "state": "online", 00:10:16.446 "raid_level": "raid0", 00:10:16.446 "superblock": true, 00:10:16.446 "num_base_bdevs": 4, 00:10:16.446 "num_base_bdevs_discovered": 4, 00:10:16.446 "num_base_bdevs_operational": 4, 00:10:16.446 "base_bdevs_list": [ 00:10:16.446 { 00:10:16.446 "name": "pt1", 00:10:16.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.446 "is_configured": true, 00:10:16.446 "data_offset": 2048, 00:10:16.446 "data_size": 63488 00:10:16.446 }, 00:10:16.446 { 00:10:16.446 "name": "pt2", 00:10:16.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.446 "is_configured": true, 00:10:16.446 "data_offset": 2048, 00:10:16.446 "data_size": 63488 00:10:16.446 }, 00:10:16.446 { 00:10:16.446 "name": "pt3", 00:10:16.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.446 "is_configured": true, 00:10:16.446 "data_offset": 2048, 00:10:16.446 
"data_size": 63488 00:10:16.446 }, 00:10:16.446 { 00:10:16.446 "name": "pt4", 00:10:16.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.446 "is_configured": true, 00:10:16.446 "data_offset": 2048, 00:10:16.446 "data_size": 63488 00:10:16.446 } 00:10:16.446 ] 00:10:16.446 }' 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.446 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.055 [2024-11-18 10:38:42.699429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.055 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.056 "name": "raid_bdev1", 00:10:17.056 "aliases": [ 00:10:17.056 "c24fb416-89ed-4fbc-ae79-f0acabcb97b3" 
00:10:17.056 ], 00:10:17.056 "product_name": "Raid Volume", 00:10:17.056 "block_size": 512, 00:10:17.056 "num_blocks": 253952, 00:10:17.056 "uuid": "c24fb416-89ed-4fbc-ae79-f0acabcb97b3", 00:10:17.056 "assigned_rate_limits": { 00:10:17.056 "rw_ios_per_sec": 0, 00:10:17.056 "rw_mbytes_per_sec": 0, 00:10:17.056 "r_mbytes_per_sec": 0, 00:10:17.056 "w_mbytes_per_sec": 0 00:10:17.056 }, 00:10:17.056 "claimed": false, 00:10:17.056 "zoned": false, 00:10:17.056 "supported_io_types": { 00:10:17.056 "read": true, 00:10:17.056 "write": true, 00:10:17.056 "unmap": true, 00:10:17.056 "flush": true, 00:10:17.056 "reset": true, 00:10:17.056 "nvme_admin": false, 00:10:17.056 "nvme_io": false, 00:10:17.056 "nvme_io_md": false, 00:10:17.056 "write_zeroes": true, 00:10:17.056 "zcopy": false, 00:10:17.056 "get_zone_info": false, 00:10:17.056 "zone_management": false, 00:10:17.056 "zone_append": false, 00:10:17.056 "compare": false, 00:10:17.056 "compare_and_write": false, 00:10:17.056 "abort": false, 00:10:17.056 "seek_hole": false, 00:10:17.056 "seek_data": false, 00:10:17.056 "copy": false, 00:10:17.056 "nvme_iov_md": false 00:10:17.056 }, 00:10:17.056 "memory_domains": [ 00:10:17.056 { 00:10:17.056 "dma_device_id": "system", 00:10:17.056 "dma_device_type": 1 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.056 "dma_device_type": 2 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "dma_device_id": "system", 00:10:17.056 "dma_device_type": 1 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.056 "dma_device_type": 2 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "dma_device_id": "system", 00:10:17.056 "dma_device_type": 1 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.056 "dma_device_type": 2 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "dma_device_id": "system", 00:10:17.056 "dma_device_type": 1 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:17.056 "dma_device_type": 2 00:10:17.056 } 00:10:17.056 ], 00:10:17.056 "driver_specific": { 00:10:17.056 "raid": { 00:10:17.056 "uuid": "c24fb416-89ed-4fbc-ae79-f0acabcb97b3", 00:10:17.056 "strip_size_kb": 64, 00:10:17.056 "state": "online", 00:10:17.056 "raid_level": "raid0", 00:10:17.056 "superblock": true, 00:10:17.056 "num_base_bdevs": 4, 00:10:17.056 "num_base_bdevs_discovered": 4, 00:10:17.056 "num_base_bdevs_operational": 4, 00:10:17.056 "base_bdevs_list": [ 00:10:17.056 { 00:10:17.056 "name": "pt1", 00:10:17.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.056 "is_configured": true, 00:10:17.056 "data_offset": 2048, 00:10:17.056 "data_size": 63488 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "name": "pt2", 00:10:17.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.056 "is_configured": true, 00:10:17.056 "data_offset": 2048, 00:10:17.056 "data_size": 63488 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "name": "pt3", 00:10:17.056 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.056 "is_configured": true, 00:10:17.056 "data_offset": 2048, 00:10:17.056 "data_size": 63488 00:10:17.056 }, 00:10:17.056 { 00:10:17.056 "name": "pt4", 00:10:17.056 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.056 "is_configured": true, 00:10:17.056 "data_offset": 2048, 00:10:17.056 "data_size": 63488 00:10:17.056 } 00:10:17.056 ] 00:10:17.056 } 00:10:17.056 } 00:10:17.056 }' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.056 pt2 00:10:17.056 pt3 00:10:17.056 pt4' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.056 10:38:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.056 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.316 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.316 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:17.317 10:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 [2024-11-18 10:38:43.006768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c24fb416-89ed-4fbc-ae79-f0acabcb97b3
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c24fb416-89ed-4fbc-ae79-f0acabcb97b3 ']'
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 [2024-11-18 10:38:43.050421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:17.317 [2024-11-18 10:38:43.050446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:17.317 [2024-11-18 10:38:43.050522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:17.317 [2024-11-18 10:38:43.050592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:17.317 [2024-11-18 10:38:43.050607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.317 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.317 [2024-11-18 10:38:43.198226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:17.577 [2024-11-18 10:38:43.200395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:17.577 [2024-11-18 10:38:43.200443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:10:17.577 [2024-11-18 10:38:43.200481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:10:17.577 [2024-11-18 10:38:43.200533] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:17.577 [2024-11-18 10:38:43.200583] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:17.577 [2024-11-18 10:38:43.200602] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:10:17.577 [2024-11-18 10:38:43.200621] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:10:17.577 [2024-11-18 10:38:43.200634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:17.577 [2024-11-18 10:38:43.200646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:10:17.577 request:
00:10:17.577 {
00:10:17.577 "name": "raid_bdev1",
00:10:17.577 "raid_level": "raid0",
00:10:17.577 "base_bdevs": [
00:10:17.577 "malloc1",
00:10:17.577 "malloc2",
00:10:17.577 "malloc3",
00:10:17.577 "malloc4"
00:10:17.577 ],
00:10:17.577 "strip_size_kb": 64,
00:10:17.577 "superblock": false,
00:10:17.577 "method": "bdev_raid_create",
00:10:17.577 "req_id": 1
00:10:17.577 }
00:10:17.577 Got JSON-RPC error response
00:10:17.577 response:
00:10:17.577 {
00:10:17.577 "code": -17,
00:10:17.577 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:17.577 }
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.577 [2024-11-18 10:38:43.266064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:17.577 [2024-11-18 10:38:43.266154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:17.577 [2024-11-18 10:38:43.266199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:17.577 [2024-11-18 10:38:43.266230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:17.577 [2024-11-18 10:38:43.268601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:17.577 [2024-11-18 10:38:43.268684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:17.577 [2024-11-18 10:38:43.268774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:17.577 [2024-11-18 10:38:43.268860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:17.577 pt1
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:17.577 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.578 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.578 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:17.578 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.578 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:17.578 "name": "raid_bdev1",
00:10:17.578 "uuid": "c24fb416-89ed-4fbc-ae79-f0acabcb97b3",
00:10:17.578 "strip_size_kb": 64,
00:10:17.578 "state": "configuring",
00:10:17.578 "raid_level": "raid0",
00:10:17.578 "superblock": true,
00:10:17.578 "num_base_bdevs": 4,
00:10:17.578 "num_base_bdevs_discovered": 1,
00:10:17.578 "num_base_bdevs_operational": 4,
00:10:17.578 "base_bdevs_list": [
00:10:17.578 {
00:10:17.578 "name": "pt1",
00:10:17.578 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:17.578 "is_configured": true,
00:10:17.578 "data_offset": 2048,
00:10:17.578 "data_size": 63488
00:10:17.578 },
00:10:17.578 {
00:10:17.578 "name": null,
00:10:17.578 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:17.578 "is_configured": false,
00:10:17.578 "data_offset": 2048,
00:10:17.578 "data_size": 63488
00:10:17.578 },
00:10:17.578 {
00:10:17.578 "name": null,
00:10:17.578 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:17.578 "is_configured": false,
00:10:17.578 "data_offset": 2048,
00:10:17.578 "data_size": 63488
00:10:17.578 },
00:10:17.578 {
00:10:17.578 "name": null,
00:10:17.578 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:17.578 "is_configured": false,
00:10:17.578 "data_offset": 2048,
00:10:17.578 "data_size": 63488
00:10:17.578 }
00:10:17.578 ]
00:10:17.578 }'
00:10:17.578 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:17.578 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.147 [2024-11-18 10:38:43.745279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:18.147 [2024-11-18 10:38:43.745398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:18.147 [2024-11-18 10:38:43.745435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:10:18.147 [2024-11-18 10:38:43.745464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:18.147 [2024-11-18 10:38:43.745947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:18.147 [2024-11-18 10:38:43.746004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:18.147 [2024-11-18 10:38:43.746109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:18.147 [2024-11-18 10:38:43.746139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:18.147 pt2
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.147 [2024-11-18 10:38:43.753270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:18.147 "name": "raid_bdev1",
00:10:18.147 "uuid": "c24fb416-89ed-4fbc-ae79-f0acabcb97b3",
00:10:18.147 "strip_size_kb": 64,
00:10:18.147 "state": "configuring",
00:10:18.147 "raid_level": "raid0",
00:10:18.147 "superblock": true,
00:10:18.147 "num_base_bdevs": 4,
00:10:18.147 "num_base_bdevs_discovered": 1,
00:10:18.147 "num_base_bdevs_operational": 4,
00:10:18.147 "base_bdevs_list": [
00:10:18.147 {
00:10:18.147 "name": "pt1",
00:10:18.147 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:18.147 "is_configured": true,
00:10:18.147 "data_offset": 2048,
00:10:18.147 "data_size": 63488
00:10:18.147 },
00:10:18.147 {
00:10:18.147 "name": null,
00:10:18.147 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:18.147 "is_configured": false,
00:10:18.147 "data_offset": 0,
00:10:18.147 "data_size": 63488
00:10:18.147 },
00:10:18.147 {
00:10:18.147 "name": null,
00:10:18.147 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:18.147 "is_configured": false,
00:10:18.147 "data_offset": 2048,
00:10:18.147 "data_size": 63488
00:10:18.147 },
00:10:18.147 {
00:10:18.147 "name": null,
00:10:18.147 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:18.147 "is_configured": false,
00:10:18.147 "data_offset": 2048,
00:10:18.147 "data_size": 63488
00:10:18.147 }
00:10:18.147 ]
00:10:18.147 }'
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:18.147 10:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.408 [2024-11-18 10:38:44.196468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:18.408 [2024-11-18 10:38:44.196563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:18.408 [2024-11-18 10:38:44.196586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:10:18.408 [2024-11-18 10:38:44.196594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:18.408 [2024-11-18 10:38:44.197028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:18.408 [2024-11-18 10:38:44.197044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:18.408 [2024-11-18 10:38:44.197116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:18.408 [2024-11-18 10:38:44.197136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:18.408 pt2
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.408 [2024-11-18 10:38:44.208437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:18.408 [2024-11-18 10:38:44.208479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:18.408 [2024-11-18 10:38:44.208501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:10:18.408 [2024-11-18 10:38:44.208511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:18.408 [2024-11-18 10:38:44.208858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:18.408 [2024-11-18 10:38:44.208873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:18.408 [2024-11-18 10:38:44.208927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:18.408 [2024-11-18 10:38:44.208942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:18.408 pt3
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.408 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.408 [2024-11-18 10:38:44.220397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:10:18.409 [2024-11-18 10:38:44.220439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:18.409 [2024-11-18 10:38:44.220456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:10:18.409 [2024-11-18 10:38:44.220463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:18.409 [2024-11-18 10:38:44.220811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:18.409 [2024-11-18 10:38:44.220825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:10:18.409 [2024-11-18 10:38:44.220877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:10:18.409 [2024-11-18 10:38:44.220892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:10:18.409 [2024-11-18 10:38:44.221008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:18.409 [2024-11-18 10:38:44.221015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:18.409 [2024-11-18 10:38:44.221280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:10:18.409 [2024-11-18 10:38:44.221426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:18.409 [2024-11-18 10:38:44.221440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:10:18.409 [2024-11-18 10:38:44.221551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:18.409 pt4
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:18.409 "name": "raid_bdev1",
00:10:18.409 "uuid": "c24fb416-89ed-4fbc-ae79-f0acabcb97b3",
00:10:18.409 "strip_size_kb": 64,
00:10:18.409 "state": "online",
00:10:18.409 "raid_level": "raid0",
00:10:18.409 "superblock": true,
00:10:18.409 "num_base_bdevs": 4,
00:10:18.409 "num_base_bdevs_discovered": 4,
00:10:18.409 "num_base_bdevs_operational": 4,
00:10:18.409 "base_bdevs_list": [
00:10:18.409 {
00:10:18.409 "name": "pt1",
00:10:18.409 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:18.409 "is_configured": true,
00:10:18.409 "data_offset": 2048,
00:10:18.409 "data_size": 63488
00:10:18.409 },
00:10:18.409 {
00:10:18.409 "name": "pt2",
00:10:18.409 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:18.409 "is_configured": true,
00:10:18.409 "data_offset": 2048,
00:10:18.409 "data_size": 63488
00:10:18.409 },
00:10:18.409 {
00:10:18.409 "name": "pt3",
00:10:18.409 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:18.409 "is_configured": true,
00:10:18.409 "data_offset": 2048,
00:10:18.409 "data_size": 63488
00:10:18.409 },
00:10:18.409 {
00:10:18.409 "name": "pt4",
00:10:18.409 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:18.409 "is_configured": true,
00:10:18.409 "data_offset": 2048,
00:10:18.409 "data_size": 63488
00:10:18.409 }
00:10:18.409 ]
00:10:18.409 }'
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:18.409 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:18.979 [2024-11-18 10:38:44.655971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.979 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:18.979 "name": "raid_bdev1",
00:10:18.979 "aliases": [
00:10:18.979 "c24fb416-89ed-4fbc-ae79-f0acabcb97b3"
00:10:18.979 ],
00:10:18.979 "product_name": "Raid Volume",
00:10:18.979 "block_size": 512,
00:10:18.979 "num_blocks": 253952,
00:10:18.979 "uuid": "c24fb416-89ed-4fbc-ae79-f0acabcb97b3",
00:10:18.979 "assigned_rate_limits": {
00:10:18.979 "rw_ios_per_sec": 0,
00:10:18.979 "rw_mbytes_per_sec": 0,
00:10:18.979 "r_mbytes_per_sec": 0,
00:10:18.979 "w_mbytes_per_sec": 0
00:10:18.979 },
00:10:18.979 "claimed": false,
00:10:18.979 "zoned": false,
00:10:18.979 "supported_io_types": {
00:10:18.979 "read": true,
00:10:18.979 "write": true,
00:10:18.979 "unmap": true,
00:10:18.979 "flush": true,
00:10:18.979 "reset": true,
00:10:18.979 "nvme_admin": false,
00:10:18.979 "nvme_io": false,
00:10:18.979 "nvme_io_md": false,
00:10:18.979 "write_zeroes": true,
00:10:18.979 "zcopy": false,
00:10:18.979 "get_zone_info": false,
00:10:18.979 "zone_management": false,
00:10:18.979 "zone_append": false,
00:10:18.979 "compare": false,
00:10:18.979 "compare_and_write": false,
00:10:18.979 "abort": false,
00:10:18.979 "seek_hole": false,
00:10:18.979 "seek_data": false,
00:10:18.979 "copy": false,
00:10:18.979 "nvme_iov_md": false
00:10:18.979 },
00:10:18.979 "memory_domains": [
00:10:18.979 {
00:10:18.979 "dma_device_id": "system",
00:10:18.979 "dma_device_type": 1
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:18.979 "dma_device_type": 2
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "dma_device_id": "system",
00:10:18.979 "dma_device_type": 1
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:18.979 "dma_device_type": 2
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "dma_device_id": "system",
00:10:18.979 "dma_device_type": 1
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:18.979 "dma_device_type": 2
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "dma_device_id": "system",
00:10:18.979 "dma_device_type": 1
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:18.979 "dma_device_type": 2
00:10:18.979 }
00:10:18.979 ],
00:10:18.979 "driver_specific": {
00:10:18.979 "raid": {
00:10:18.979 "uuid": "c24fb416-89ed-4fbc-ae79-f0acabcb97b3",
00:10:18.979 "strip_size_kb": 64,
00:10:18.979 "state": "online",
00:10:18.979 "raid_level": "raid0",
00:10:18.979 "superblock": true,
00:10:18.979 "num_base_bdevs": 4,
00:10:18.979 "num_base_bdevs_discovered": 4,
00:10:18.979 "num_base_bdevs_operational": 4,
00:10:18.979 "base_bdevs_list": [
00:10:18.979 {
00:10:18.979 "name": "pt1",
00:10:18.979 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:18.979 "is_configured": true,
00:10:18.979 "data_offset": 2048,
00:10:18.979 "data_size": 63488
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "name": "pt2",
00:10:18.979 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:18.979 "is_configured": true,
00:10:18.979 "data_offset": 2048,
00:10:18.979 "data_size": 63488
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "name": "pt3",
00:10:18.979 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:18.979 "is_configured": true,
00:10:18.979 "data_offset": 2048,
00:10:18.979 "data_size": 63488
00:10:18.979 },
00:10:18.979 {
00:10:18.979 "name": "pt4",
00:10:18.979 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:18.979 "is_configured": true,
00:10:18.979 "data_offset": 2048,
00:10:18.980 "data_size": 63488
00:10:18.980 }
00:10:18.980 ]
00:10:18.980 }
00:10:18.980 }
00:10:18.980 }'
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:18.980 pt2
00:10:18.980 pt3
00:10:18.980 pt4'
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:18.980 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.240 [2024-11-18 10:38:44.971378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:19.240 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c24fb416-89ed-4fbc-ae79-f0acabcb97b3 '!=' c24fb416-89ed-4fbc-ae79-f0acabcb97b3 ']'
00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70580
00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70580 ']'
00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70580
00:10:19.240 10:38:45 bdev_raid.raid_superblock_test --
common/autotest_common.sh@959 -- # uname 00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70580 00:10:19.240 killing process with pid 70580 00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70580' 00:10:19.240 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70580 00:10:19.240 [2024-11-18 10:38:45.053603] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.240 [2024-11-18 10:38:45.053676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.241 [2024-11-18 10:38:45.053745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.241 [2024-11-18 10:38:45.053754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:19.241 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70580 00:10:19.811 [2024-11-18 10:38:45.464456] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.752 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:20.752 00:10:20.752 real 0m5.515s 00:10:20.752 user 0m7.753s 00:10:20.752 sys 0m1.043s 00:10:20.752 ************************************ 00:10:20.752 END TEST raid_superblock_test 00:10:20.752 ************************************ 00:10:20.752 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.752 10:38:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.012 10:38:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:21.012 10:38:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:21.012 10:38:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.012 10:38:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.012 ************************************ 00:10:21.012 START TEST raid_read_error_test 00:10:21.012 ************************************ 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:21.012 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LQSZQNz9i1 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70840 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70840 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70840 ']' 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.013 10:38:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.013 [2024-11-18 10:38:46.789399] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:21.013 [2024-11-18 10:38:46.789570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70840 ] 00:10:21.273 [2024-11-18 10:38:46.963830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.273 [2024-11-18 10:38:47.094937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.532 [2024-11-18 10:38:47.324361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.532 [2024-11-18 10:38:47.324423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.792 BaseBdev1_malloc 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.792 true 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.792 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.792 [2024-11-18 10:38:47.671400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.792 [2024-11-18 10:38:47.671461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.792 [2024-11-18 10:38:47.671484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.792 [2024-11-18 10:38:47.671495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.792 [2024-11-18 10:38:47.673878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.792 [2024-11-18 10:38:47.674000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:22.053 BaseBdev1 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.053 BaseBdev2_malloc 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.053 true 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.053 [2024-11-18 10:38:47.741859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:22.053 [2024-11-18 10:38:47.741912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.053 [2024-11-18 10:38:47.741929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:22.053 [2024-11-18 10:38:47.741940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.053 [2024-11-18 10:38:47.744265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.053 [2024-11-18 10:38:47.744387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:22.053 BaseBdev2 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.053 BaseBdev3_malloc 00:10:22.053 10:38:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.053 true 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.053 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.053 [2024-11-18 10:38:47.821455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:22.053 [2024-11-18 10:38:47.821504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.053 [2024-11-18 10:38:47.821523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:22.053 [2024-11-18 10:38:47.821534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.054 [2024-11-18 10:38:47.823910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.054 [2024-11-18 10:38:47.824016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:22.054 BaseBdev3 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.054 BaseBdev4_malloc 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.054 true 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.054 [2024-11-18 10:38:47.893743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:22.054 [2024-11-18 10:38:47.893791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.054 [2024-11-18 10:38:47.893808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:22.054 [2024-11-18 10:38:47.893819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.054 [2024-11-18 10:38:47.896122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.054 [2024-11-18 10:38:47.896159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:22.054 BaseBdev4 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.054 [2024-11-18 10:38:47.905787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.054 [2024-11-18 10:38:47.907872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.054 [2024-11-18 10:38:47.907943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.054 [2024-11-18 10:38:47.908006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:22.054 [2024-11-18 10:38:47.908240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:22.054 [2024-11-18 10:38:47.908256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.054 [2024-11-18 10:38:47.908489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:22.054 [2024-11-18 10:38:47.908661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:22.054 [2024-11-18 10:38:47.908672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:22.054 [2024-11-18 10:38:47.908825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:22.054 10:38:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.054 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.313 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.313 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.313 "name": "raid_bdev1", 00:10:22.313 "uuid": "45331cd8-0564-4821-b03d-752b13424ff7", 00:10:22.313 "strip_size_kb": 64, 00:10:22.313 "state": "online", 00:10:22.313 "raid_level": "raid0", 00:10:22.313 "superblock": true, 00:10:22.313 "num_base_bdevs": 4, 00:10:22.313 "num_base_bdevs_discovered": 4, 00:10:22.313 "num_base_bdevs_operational": 4, 00:10:22.313 "base_bdevs_list": [ 00:10:22.313 
{ 00:10:22.313 "name": "BaseBdev1", 00:10:22.313 "uuid": "f965720f-a6e1-5e74-84a8-5d10e2e106e8", 00:10:22.313 "is_configured": true, 00:10:22.313 "data_offset": 2048, 00:10:22.313 "data_size": 63488 00:10:22.313 }, 00:10:22.313 { 00:10:22.313 "name": "BaseBdev2", 00:10:22.313 "uuid": "00dee066-d871-5ba0-9065-4710f2f29ef8", 00:10:22.313 "is_configured": true, 00:10:22.313 "data_offset": 2048, 00:10:22.313 "data_size": 63488 00:10:22.313 }, 00:10:22.313 { 00:10:22.313 "name": "BaseBdev3", 00:10:22.313 "uuid": "45d82499-44b1-5aec-90ad-d45188f41cd9", 00:10:22.313 "is_configured": true, 00:10:22.313 "data_offset": 2048, 00:10:22.313 "data_size": 63488 00:10:22.313 }, 00:10:22.313 { 00:10:22.313 "name": "BaseBdev4", 00:10:22.313 "uuid": "d1f45811-5089-5fa8-9231-d33ce1648f2a", 00:10:22.313 "is_configured": true, 00:10:22.313 "data_offset": 2048, 00:10:22.313 "data_size": 63488 00:10:22.313 } 00:10:22.313 ] 00:10:22.313 }' 00:10:22.313 10:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.313 10:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.573 10:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.573 10:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:22.573 [2024-11-18 10:38:48.422340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.511 10:38:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.511 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.771 10:38:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.771 "name": "raid_bdev1", 00:10:23.771 "uuid": "45331cd8-0564-4821-b03d-752b13424ff7", 00:10:23.771 "strip_size_kb": 64, 00:10:23.771 "state": "online", 00:10:23.771 "raid_level": "raid0", 00:10:23.771 "superblock": true, 00:10:23.771 "num_base_bdevs": 4, 00:10:23.771 "num_base_bdevs_discovered": 4, 00:10:23.771 "num_base_bdevs_operational": 4, 00:10:23.771 "base_bdevs_list": [ 00:10:23.771 { 00:10:23.771 "name": "BaseBdev1", 00:10:23.771 "uuid": "f965720f-a6e1-5e74-84a8-5d10e2e106e8", 00:10:23.771 "is_configured": true, 00:10:23.771 "data_offset": 2048, 00:10:23.771 "data_size": 63488 00:10:23.771 }, 00:10:23.771 { 00:10:23.771 "name": "BaseBdev2", 00:10:23.771 "uuid": "00dee066-d871-5ba0-9065-4710f2f29ef8", 00:10:23.771 "is_configured": true, 00:10:23.771 "data_offset": 2048, 00:10:23.771 "data_size": 63488 00:10:23.771 }, 00:10:23.771 { 00:10:23.771 "name": "BaseBdev3", 00:10:23.771 "uuid": "45d82499-44b1-5aec-90ad-d45188f41cd9", 00:10:23.771 "is_configured": true, 00:10:23.771 "data_offset": 2048, 00:10:23.771 "data_size": 63488 00:10:23.771 }, 00:10:23.771 { 00:10:23.771 "name": "BaseBdev4", 00:10:23.771 "uuid": "d1f45811-5089-5fa8-9231-d33ce1648f2a", 00:10:23.771 "is_configured": true, 00:10:23.771 "data_offset": 2048, 00:10:23.771 "data_size": 63488 00:10:23.771 } 00:10:23.771 ] 00:10:23.771 }' 00:10:23.771 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.771 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.031 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.031 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.031 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.032 [2024-11-18 10:38:49.806967] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.032 [2024-11-18 10:38:49.807100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.032 [2024-11-18 10:38:49.809643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.032 [2024-11-18 10:38:49.809745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.032 [2024-11-18 10:38:49.809817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.032 [2024-11-18 10:38:49.809861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:24.032 { 00:10:24.032 "results": [ 00:10:24.032 { 00:10:24.032 "job": "raid_bdev1", 00:10:24.032 "core_mask": "0x1", 00:10:24.032 "workload": "randrw", 00:10:24.032 "percentage": 50, 00:10:24.032 "status": "finished", 00:10:24.032 "queue_depth": 1, 00:10:24.032 "io_size": 131072, 00:10:24.032 "runtime": 1.385387, 00:10:24.032 "iops": 14473.212178257772, 00:10:24.032 "mibps": 1809.1515222822215, 00:10:24.032 "io_failed": 1, 00:10:24.032 "io_timeout": 0, 00:10:24.032 "avg_latency_us": 97.43545733059112, 00:10:24.032 "min_latency_us": 24.817467248908297, 00:10:24.032 "max_latency_us": 1402.2986899563318 00:10:24.032 } 00:10:24.032 ], 00:10:24.032 "core_count": 1 00:10:24.032 } 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70840 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70840 ']' 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70840 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70840 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70840' 00:10:24.032 killing process with pid 70840 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70840 00:10:24.032 [2024-11-18 10:38:49.848830] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.032 10:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70840 00:10:24.600 [2024-11-18 10:38:50.185419] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LQSZQNz9i1 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:25.547 00:10:25.547 real 0m4.715s 00:10:25.547 user 0m5.426s 00:10:25.547 sys 0m0.687s 00:10:25.547 ************************************ 00:10:25.547 END TEST raid_read_error_test 
00:10:25.547 ************************************ 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.547 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.824 10:38:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:25.824 10:38:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.824 10:38:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.824 10:38:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.824 ************************************ 00:10:25.824 START TEST raid_write_error_test 00:10:25.824 ************************************ 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qc8Ph18bHq 00:10:25.824 10:38:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70991 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70991 00:10:25.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70991 ']' 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.824 10:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.824 [2024-11-18 10:38:51.581165] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:25.824 [2024-11-18 10:38:51.581283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:10:26.084 [2024-11-18 10:38:51.755665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.084 [2024-11-18 10:38:51.882886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.343 [2024-11-18 10:38:52.115722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.343 [2024-11-18 10:38:52.115787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.602 BaseBdev1_malloc 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.602 true 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.602 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.602 [2024-11-18 10:38:52.452118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.602 [2024-11-18 10:38:52.452192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.602 [2024-11-18 10:38:52.452213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:26.603 [2024-11-18 10:38:52.452225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.603 [2024-11-18 10:38:52.454517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.603 [2024-11-18 10:38:52.454553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.603 BaseBdev1 00:10:26.603 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.603 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.603 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:26.603 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.603 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 BaseBdev2_malloc 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:26.863 10:38:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 true 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 [2024-11-18 10:38:52.520205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:26.863 [2024-11-18 10:38:52.520256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.863 [2024-11-18 10:38:52.520273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:26.863 [2024-11-18 10:38:52.520283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.863 [2024-11-18 10:38:52.522472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.863 [2024-11-18 10:38:52.522507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:26.863 BaseBdev2 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:26.863 BaseBdev3_malloc 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 true 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 [2024-11-18 10:38:52.624633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:26.863 [2024-11-18 10:38:52.624684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.863 [2024-11-18 10:38:52.624700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:26.863 [2024-11-18 10:38:52.624711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.863 [2024-11-18 10:38:52.627103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.863 [2024-11-18 10:38:52.627184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:26.863 BaseBdev3 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 BaseBdev4_malloc 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 true 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 [2024-11-18 10:38:52.696066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:26.863 [2024-11-18 10:38:52.696118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.863 [2024-11-18 10:38:52.696136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:26.863 [2024-11-18 10:38:52.696147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.863 [2024-11-18 10:38:52.698390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.863 [2024-11-18 10:38:52.698480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:26.863 BaseBdev4 
00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 [2024-11-18 10:38:52.708105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.863 [2024-11-18 10:38:52.710096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.863 [2024-11-18 10:38:52.710218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.863 [2024-11-18 10:38:52.710288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.863 [2024-11-18 10:38:52.710491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:26.863 [2024-11-18 10:38:52.710507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:26.863 [2024-11-18 10:38:52.710726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:26.863 [2024-11-18 10:38:52.710874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:26.863 [2024-11-18 10:38:52.710884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:26.863 [2024-11-18 10:38:52.711042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.123 "name": "raid_bdev1", 00:10:27.123 "uuid": "b4966229-43dd-49e8-89b0-e1dc452fe601", 00:10:27.123 "strip_size_kb": 64, 00:10:27.123 "state": "online", 00:10:27.123 "raid_level": "raid0", 00:10:27.123 "superblock": true, 00:10:27.123 "num_base_bdevs": 4, 00:10:27.123 "num_base_bdevs_discovered": 4, 00:10:27.123 
"num_base_bdevs_operational": 4, 00:10:27.123 "base_bdevs_list": [ 00:10:27.123 { 00:10:27.123 "name": "BaseBdev1", 00:10:27.123 "uuid": "a5eda243-cd04-56e5-b5d1-7d56040d712a", 00:10:27.123 "is_configured": true, 00:10:27.123 "data_offset": 2048, 00:10:27.123 "data_size": 63488 00:10:27.123 }, 00:10:27.123 { 00:10:27.123 "name": "BaseBdev2", 00:10:27.123 "uuid": "673b9b54-c495-58c9-ba87-2818d3738bf1", 00:10:27.123 "is_configured": true, 00:10:27.123 "data_offset": 2048, 00:10:27.123 "data_size": 63488 00:10:27.123 }, 00:10:27.123 { 00:10:27.123 "name": "BaseBdev3", 00:10:27.123 "uuid": "a645578e-6e9e-5711-8b3c-d9a5f9f61bc5", 00:10:27.123 "is_configured": true, 00:10:27.123 "data_offset": 2048, 00:10:27.123 "data_size": 63488 00:10:27.123 }, 00:10:27.123 { 00:10:27.123 "name": "BaseBdev4", 00:10:27.123 "uuid": "cf4a5ed4-60aa-5347-9ea5-a9a88deb2684", 00:10:27.123 "is_configured": true, 00:10:27.123 "data_offset": 2048, 00:10:27.123 "data_size": 63488 00:10:27.123 } 00:10:27.123 ] 00:10:27.123 }' 00:10:27.123 10:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.123 10:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.383 10:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.383 10:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.383 [2024-11-18 10:38:53.232301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.322 10:38:54 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.582 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.582 "name": "raid_bdev1", 00:10:28.582 "uuid": "b4966229-43dd-49e8-89b0-e1dc452fe601", 00:10:28.582 "strip_size_kb": 64, 00:10:28.582 "state": "online", 00:10:28.582 "raid_level": "raid0", 00:10:28.582 "superblock": true, 00:10:28.582 "num_base_bdevs": 4, 00:10:28.582 "num_base_bdevs_discovered": 4, 00:10:28.582 "num_base_bdevs_operational": 4, 00:10:28.582 "base_bdevs_list": [ 00:10:28.582 { 00:10:28.582 "name": "BaseBdev1", 00:10:28.582 "uuid": "a5eda243-cd04-56e5-b5d1-7d56040d712a", 00:10:28.582 "is_configured": true, 00:10:28.582 "data_offset": 2048, 00:10:28.582 "data_size": 63488 00:10:28.582 }, 00:10:28.582 { 00:10:28.582 "name": "BaseBdev2", 00:10:28.582 "uuid": "673b9b54-c495-58c9-ba87-2818d3738bf1", 00:10:28.582 "is_configured": true, 00:10:28.582 "data_offset": 2048, 00:10:28.582 "data_size": 63488 00:10:28.582 }, 00:10:28.582 { 00:10:28.582 "name": "BaseBdev3", 00:10:28.582 "uuid": "a645578e-6e9e-5711-8b3c-d9a5f9f61bc5", 00:10:28.582 "is_configured": true, 00:10:28.582 "data_offset": 2048, 00:10:28.582 "data_size": 63488 00:10:28.582 }, 00:10:28.582 { 00:10:28.582 "name": "BaseBdev4", 00:10:28.582 "uuid": "cf4a5ed4-60aa-5347-9ea5-a9a88deb2684", 00:10:28.582 "is_configured": true, 00:10:28.582 "data_offset": 2048, 00:10:28.582 "data_size": 63488 00:10:28.582 } 00:10:28.582 ] 00:10:28.582 }' 00:10:28.582 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.582 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:28.842 [2024-11-18 10:38:54.664766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.842 [2024-11-18 10:38:54.664810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.842 [2024-11-18 10:38:54.667401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.842 [2024-11-18 10:38:54.667506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.842 [2024-11-18 10:38:54.667584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.842 [2024-11-18 10:38:54.667633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:28.842 { 00:10:28.842 "results": [ 00:10:28.842 { 00:10:28.842 "job": "raid_bdev1", 00:10:28.842 "core_mask": "0x1", 00:10:28.842 "workload": "randrw", 00:10:28.842 "percentage": 50, 00:10:28.842 "status": "finished", 00:10:28.842 "queue_depth": 1, 00:10:28.842 "io_size": 131072, 00:10:28.842 "runtime": 1.43331, 00:10:28.842 "iops": 14526.515547927524, 00:10:28.842 "mibps": 1815.8144434909404, 00:10:28.842 "io_failed": 1, 00:10:28.842 "io_timeout": 0, 00:10:28.842 "avg_latency_us": 97.06973720690955, 00:10:28.842 "min_latency_us": 24.482096069868994, 00:10:28.842 "max_latency_us": 1345.0620087336245 00:10:28.842 } 00:10:28.842 ], 00:10:28.842 "core_count": 1 00:10:28.842 } 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70991 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70991 ']' 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70991 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70991 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70991' 00:10:28.842 killing process with pid 70991 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70991 00:10:28.842 [2024-11-18 10:38:54.717548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.842 10:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70991 00:10:29.413 [2024-11-18 10:38:55.053068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qc8Ph18bHq 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:30.795 00:10:30.795 real 0m4.787s 00:10:30.795 user 0m5.507s 00:10:30.795 sys 0m0.706s 00:10:30.795 
************************************ 00:10:30.795 END TEST raid_write_error_test 00:10:30.795 ************************************ 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.795 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.795 10:38:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:30.795 10:38:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:30.795 10:38:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:30.795 10:38:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.795 10:38:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.795 ************************************ 00:10:30.795 START TEST raid_state_function_test 00:10:30.795 ************************************ 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.795 10:38:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:30.795 10:38:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71136 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.795 Process raid pid: 71136 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71136' 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71136 00:10:30.795 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71136 ']' 00:10:30.796 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.796 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.796 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.796 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.796 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.796 [2024-11-18 10:38:56.437603] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:30.796 [2024-11-18 10:38:56.437793] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.796 [2024-11-18 10:38:56.612751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.055 [2024-11-18 10:38:56.749135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.313 [2024-11-18 10:38:56.977940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.313 [2024-11-18 10:38:56.977985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.573 [2024-11-18 10:38:57.255226] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.573 [2024-11-18 10:38:57.255287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.573 [2024-11-18 10:38:57.255297] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.573 [2024-11-18 10:38:57.255311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.573 [2024-11-18 10:38:57.255318] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:31.573 [2024-11-18 10:38:57.255328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.573 [2024-11-18 10:38:57.255334] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.573 [2024-11-18 10:38:57.255343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.573 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.573 "name": "Existed_Raid", 00:10:31.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.573 "strip_size_kb": 64, 00:10:31.573 "state": "configuring", 00:10:31.573 "raid_level": "concat", 00:10:31.573 "superblock": false, 00:10:31.573 "num_base_bdevs": 4, 00:10:31.573 "num_base_bdevs_discovered": 0, 00:10:31.573 "num_base_bdevs_operational": 4, 00:10:31.573 "base_bdevs_list": [ 00:10:31.573 { 00:10:31.573 "name": "BaseBdev1", 00:10:31.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.574 "is_configured": false, 00:10:31.574 "data_offset": 0, 00:10:31.574 "data_size": 0 00:10:31.574 }, 00:10:31.574 { 00:10:31.574 "name": "BaseBdev2", 00:10:31.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.574 "is_configured": false, 00:10:31.574 "data_offset": 0, 00:10:31.574 "data_size": 0 00:10:31.574 }, 00:10:31.574 { 00:10:31.574 "name": "BaseBdev3", 00:10:31.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.574 "is_configured": false, 00:10:31.574 "data_offset": 0, 00:10:31.574 "data_size": 0 00:10:31.574 }, 00:10:31.574 { 00:10:31.574 "name": "BaseBdev4", 00:10:31.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.574 "is_configured": false, 00:10:31.574 "data_offset": 0, 00:10:31.574 "data_size": 0 00:10:31.574 } 00:10:31.574 ] 00:10:31.574 }' 00:10:31.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.833 [2024-11-18 10:38:57.690384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.833 [2024-11-18 10:38:57.690495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.833 [2024-11-18 10:38:57.702371] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.833 [2024-11-18 10:38:57.702460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.833 [2024-11-18 10:38:57.702486] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.833 [2024-11-18 10:38:57.702508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.833 [2024-11-18 10:38:57.702525] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.833 [2024-11-18 10:38:57.702544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.833 [2024-11-18 10:38:57.702560] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.833 [2024-11-18 10:38:57.702580] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.833 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.093 [2024-11-18 10:38:57.754308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.093 BaseBdev1 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.093 [ 00:10:32.093 { 00:10:32.093 "name": "BaseBdev1", 00:10:32.093 "aliases": [ 00:10:32.093 "5aab412d-3f55-4304-abc4-528593938268" 00:10:32.093 ], 00:10:32.093 "product_name": "Malloc disk", 00:10:32.093 "block_size": 512, 00:10:32.093 "num_blocks": 65536, 00:10:32.093 "uuid": "5aab412d-3f55-4304-abc4-528593938268", 00:10:32.093 "assigned_rate_limits": { 00:10:32.093 "rw_ios_per_sec": 0, 00:10:32.093 "rw_mbytes_per_sec": 0, 00:10:32.093 "r_mbytes_per_sec": 0, 00:10:32.093 "w_mbytes_per_sec": 0 00:10:32.093 }, 00:10:32.093 "claimed": true, 00:10:32.093 "claim_type": "exclusive_write", 00:10:32.093 "zoned": false, 00:10:32.093 "supported_io_types": { 00:10:32.093 "read": true, 00:10:32.093 "write": true, 00:10:32.093 "unmap": true, 00:10:32.093 "flush": true, 00:10:32.093 "reset": true, 00:10:32.093 "nvme_admin": false, 00:10:32.093 "nvme_io": false, 00:10:32.093 "nvme_io_md": false, 00:10:32.093 "write_zeroes": true, 00:10:32.093 "zcopy": true, 00:10:32.093 "get_zone_info": false, 00:10:32.093 "zone_management": false, 00:10:32.093 "zone_append": false, 00:10:32.093 "compare": false, 00:10:32.093 "compare_and_write": false, 00:10:32.093 "abort": true, 00:10:32.093 "seek_hole": false, 00:10:32.093 "seek_data": false, 00:10:32.093 "copy": true, 00:10:32.093 "nvme_iov_md": false 00:10:32.093 }, 00:10:32.093 "memory_domains": [ 00:10:32.093 { 00:10:32.093 "dma_device_id": "system", 00:10:32.093 "dma_device_type": 1 00:10:32.093 }, 00:10:32.093 { 00:10:32.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.093 "dma_device_type": 2 00:10:32.093 } 00:10:32.093 ], 00:10:32.093 "driver_specific": {} 00:10:32.093 } 00:10:32.093 ] 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.093 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.094 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.094 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.094 "name": "Existed_Raid", 
00:10:32.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.094 "strip_size_kb": 64, 00:10:32.094 "state": "configuring", 00:10:32.094 "raid_level": "concat", 00:10:32.094 "superblock": false, 00:10:32.094 "num_base_bdevs": 4, 00:10:32.094 "num_base_bdevs_discovered": 1, 00:10:32.094 "num_base_bdevs_operational": 4, 00:10:32.094 "base_bdevs_list": [ 00:10:32.094 { 00:10:32.094 "name": "BaseBdev1", 00:10:32.094 "uuid": "5aab412d-3f55-4304-abc4-528593938268", 00:10:32.094 "is_configured": true, 00:10:32.094 "data_offset": 0, 00:10:32.094 "data_size": 65536 00:10:32.094 }, 00:10:32.094 { 00:10:32.094 "name": "BaseBdev2", 00:10:32.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.094 "is_configured": false, 00:10:32.094 "data_offset": 0, 00:10:32.094 "data_size": 0 00:10:32.094 }, 00:10:32.094 { 00:10:32.094 "name": "BaseBdev3", 00:10:32.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.094 "is_configured": false, 00:10:32.094 "data_offset": 0, 00:10:32.094 "data_size": 0 00:10:32.094 }, 00:10:32.094 { 00:10:32.094 "name": "BaseBdev4", 00:10:32.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.094 "is_configured": false, 00:10:32.094 "data_offset": 0, 00:10:32.094 "data_size": 0 00:10:32.094 } 00:10:32.094 ] 00:10:32.094 }' 00:10:32.094 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.094 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.353 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.353 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.353 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.353 [2024-11-18 10:38:58.213560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.353 [2024-11-18 10:38:58.213656] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:32.353 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.353 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.353 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.353 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.353 [2024-11-18 10:38:58.225599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.354 [2024-11-18 10:38:58.227652] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.354 [2024-11-18 10:38:58.227693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.354 [2024-11-18 10:38:58.227704] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.354 [2024-11-18 10:38:58.227714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.354 [2024-11-18 10:38:58.227720] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.354 [2024-11-18 10:38:58.227728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.354 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.613 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.613 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.613 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.613 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.613 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.613 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.613 "name": "Existed_Raid", 00:10:32.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.613 "strip_size_kb": 64, 00:10:32.613 "state": "configuring", 00:10:32.614 "raid_level": "concat", 00:10:32.614 "superblock": false, 00:10:32.614 "num_base_bdevs": 4, 00:10:32.614 
"num_base_bdevs_discovered": 1, 00:10:32.614 "num_base_bdevs_operational": 4, 00:10:32.614 "base_bdevs_list": [ 00:10:32.614 { 00:10:32.614 "name": "BaseBdev1", 00:10:32.614 "uuid": "5aab412d-3f55-4304-abc4-528593938268", 00:10:32.614 "is_configured": true, 00:10:32.614 "data_offset": 0, 00:10:32.614 "data_size": 65536 00:10:32.614 }, 00:10:32.614 { 00:10:32.614 "name": "BaseBdev2", 00:10:32.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.614 "is_configured": false, 00:10:32.614 "data_offset": 0, 00:10:32.614 "data_size": 0 00:10:32.614 }, 00:10:32.614 { 00:10:32.614 "name": "BaseBdev3", 00:10:32.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.614 "is_configured": false, 00:10:32.614 "data_offset": 0, 00:10:32.614 "data_size": 0 00:10:32.614 }, 00:10:32.614 { 00:10:32.614 "name": "BaseBdev4", 00:10:32.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.614 "is_configured": false, 00:10:32.614 "data_offset": 0, 00:10:32.614 "data_size": 0 00:10:32.614 } 00:10:32.614 ] 00:10:32.614 }' 00:10:32.614 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.614 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.874 [2024-11-18 10:38:58.720750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.874 BaseBdev2 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:32.874 10:38:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.874 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.874 [ 00:10:32.874 { 00:10:32.874 "name": "BaseBdev2", 00:10:32.874 "aliases": [ 00:10:32.874 "27b72bc7-2c25-48de-bb41-93b65da17f09" 00:10:32.874 ], 00:10:32.874 "product_name": "Malloc disk", 00:10:32.874 "block_size": 512, 00:10:32.874 "num_blocks": 65536, 00:10:32.874 "uuid": "27b72bc7-2c25-48de-bb41-93b65da17f09", 00:10:32.874 "assigned_rate_limits": { 00:10:32.874 "rw_ios_per_sec": 0, 00:10:32.874 "rw_mbytes_per_sec": 0, 00:10:32.874 "r_mbytes_per_sec": 0, 00:10:32.874 "w_mbytes_per_sec": 0 00:10:32.874 }, 00:10:32.874 "claimed": true, 00:10:32.874 "claim_type": "exclusive_write", 00:10:32.874 "zoned": false, 00:10:32.874 "supported_io_types": { 
00:10:32.874 "read": true, 00:10:32.874 "write": true, 00:10:32.874 "unmap": true, 00:10:32.874 "flush": true, 00:10:32.874 "reset": true, 00:10:32.874 "nvme_admin": false, 00:10:32.874 "nvme_io": false, 00:10:32.874 "nvme_io_md": false, 00:10:32.874 "write_zeroes": true, 00:10:32.874 "zcopy": true, 00:10:32.874 "get_zone_info": false, 00:10:33.134 "zone_management": false, 00:10:33.134 "zone_append": false, 00:10:33.134 "compare": false, 00:10:33.134 "compare_and_write": false, 00:10:33.134 "abort": true, 00:10:33.134 "seek_hole": false, 00:10:33.134 "seek_data": false, 00:10:33.134 "copy": true, 00:10:33.134 "nvme_iov_md": false 00:10:33.134 }, 00:10:33.134 "memory_domains": [ 00:10:33.134 { 00:10:33.134 "dma_device_id": "system", 00:10:33.134 "dma_device_type": 1 00:10:33.134 }, 00:10:33.134 { 00:10:33.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.134 "dma_device_type": 2 00:10:33.134 } 00:10:33.134 ], 00:10:33.134 "driver_specific": {} 00:10:33.134 } 00:10:33.134 ] 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.134 "name": "Existed_Raid", 00:10:33.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.134 "strip_size_kb": 64, 00:10:33.134 "state": "configuring", 00:10:33.134 "raid_level": "concat", 00:10:33.134 "superblock": false, 00:10:33.134 "num_base_bdevs": 4, 00:10:33.134 "num_base_bdevs_discovered": 2, 00:10:33.134 "num_base_bdevs_operational": 4, 00:10:33.134 "base_bdevs_list": [ 00:10:33.134 { 00:10:33.134 "name": "BaseBdev1", 00:10:33.134 "uuid": "5aab412d-3f55-4304-abc4-528593938268", 00:10:33.134 "is_configured": true, 00:10:33.134 "data_offset": 0, 00:10:33.134 "data_size": 65536 00:10:33.134 }, 00:10:33.134 { 00:10:33.134 "name": "BaseBdev2", 00:10:33.134 "uuid": "27b72bc7-2c25-48de-bb41-93b65da17f09", 00:10:33.134 
"is_configured": true, 00:10:33.134 "data_offset": 0, 00:10:33.134 "data_size": 65536 00:10:33.134 }, 00:10:33.134 { 00:10:33.134 "name": "BaseBdev3", 00:10:33.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.134 "is_configured": false, 00:10:33.134 "data_offset": 0, 00:10:33.134 "data_size": 0 00:10:33.134 }, 00:10:33.134 { 00:10:33.134 "name": "BaseBdev4", 00:10:33.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.134 "is_configured": false, 00:10:33.134 "data_offset": 0, 00:10:33.134 "data_size": 0 00:10:33.134 } 00:10:33.134 ] 00:10:33.134 }' 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.134 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.395 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.395 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.395 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.654 [2024-11-18 10:38:59.296019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.654 BaseBdev3 00:10:33.654 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.654 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.655 [ 00:10:33.655 { 00:10:33.655 "name": "BaseBdev3", 00:10:33.655 "aliases": [ 00:10:33.655 "d2d6a45b-a585-4584-a293-1ffecf6b693e" 00:10:33.655 ], 00:10:33.655 "product_name": "Malloc disk", 00:10:33.655 "block_size": 512, 00:10:33.655 "num_blocks": 65536, 00:10:33.655 "uuid": "d2d6a45b-a585-4584-a293-1ffecf6b693e", 00:10:33.655 "assigned_rate_limits": { 00:10:33.655 "rw_ios_per_sec": 0, 00:10:33.655 "rw_mbytes_per_sec": 0, 00:10:33.655 "r_mbytes_per_sec": 0, 00:10:33.655 "w_mbytes_per_sec": 0 00:10:33.655 }, 00:10:33.655 "claimed": true, 00:10:33.655 "claim_type": "exclusive_write", 00:10:33.655 "zoned": false, 00:10:33.655 "supported_io_types": { 00:10:33.655 "read": true, 00:10:33.655 "write": true, 00:10:33.655 "unmap": true, 00:10:33.655 "flush": true, 00:10:33.655 "reset": true, 00:10:33.655 "nvme_admin": false, 00:10:33.655 "nvme_io": false, 00:10:33.655 "nvme_io_md": false, 00:10:33.655 "write_zeroes": true, 00:10:33.655 "zcopy": true, 00:10:33.655 "get_zone_info": false, 00:10:33.655 "zone_management": false, 00:10:33.655 "zone_append": false, 00:10:33.655 "compare": false, 00:10:33.655 "compare_and_write": false, 
00:10:33.655 "abort": true, 00:10:33.655 "seek_hole": false, 00:10:33.655 "seek_data": false, 00:10:33.655 "copy": true, 00:10:33.655 "nvme_iov_md": false 00:10:33.655 }, 00:10:33.655 "memory_domains": [ 00:10:33.655 { 00:10:33.655 "dma_device_id": "system", 00:10:33.655 "dma_device_type": 1 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.655 "dma_device_type": 2 00:10:33.655 } 00:10:33.655 ], 00:10:33.655 "driver_specific": {} 00:10:33.655 } 00:10:33.655 ] 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.655 "name": "Existed_Raid", 00:10:33.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.655 "strip_size_kb": 64, 00:10:33.655 "state": "configuring", 00:10:33.655 "raid_level": "concat", 00:10:33.655 "superblock": false, 00:10:33.655 "num_base_bdevs": 4, 00:10:33.655 "num_base_bdevs_discovered": 3, 00:10:33.655 "num_base_bdevs_operational": 4, 00:10:33.655 "base_bdevs_list": [ 00:10:33.655 { 00:10:33.655 "name": "BaseBdev1", 00:10:33.655 "uuid": "5aab412d-3f55-4304-abc4-528593938268", 00:10:33.655 "is_configured": true, 00:10:33.655 "data_offset": 0, 00:10:33.655 "data_size": 65536 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "name": "BaseBdev2", 00:10:33.655 "uuid": "27b72bc7-2c25-48de-bb41-93b65da17f09", 00:10:33.655 "is_configured": true, 00:10:33.655 "data_offset": 0, 00:10:33.655 "data_size": 65536 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "name": "BaseBdev3", 00:10:33.655 "uuid": "d2d6a45b-a585-4584-a293-1ffecf6b693e", 00:10:33.655 "is_configured": true, 00:10:33.655 "data_offset": 0, 00:10:33.655 "data_size": 65536 00:10:33.655 }, 00:10:33.655 { 00:10:33.655 "name": "BaseBdev4", 00:10:33.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.655 "is_configured": false, 
00:10:33.655 "data_offset": 0, 00:10:33.655 "data_size": 0 00:10:33.655 } 00:10:33.655 ] 00:10:33.655 }' 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.655 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.914 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:33.914 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.914 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.174 [2024-11-18 10:38:59.802825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.174 [2024-11-18 10:38:59.802876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:34.174 [2024-11-18 10:38:59.802885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:34.174 [2024-11-18 10:38:59.803243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:34.174 [2024-11-18 10:38:59.803424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:34.174 [2024-11-18 10:38:59.803444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:34.174 [2024-11-18 10:38:59.803717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.174 BaseBdev4 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.174 [ 00:10:34.174 { 00:10:34.174 "name": "BaseBdev4", 00:10:34.174 "aliases": [ 00:10:34.174 "3805b356-c6eb-462f-8ed8-389248a1f16b" 00:10:34.174 ], 00:10:34.174 "product_name": "Malloc disk", 00:10:34.174 "block_size": 512, 00:10:34.174 "num_blocks": 65536, 00:10:34.174 "uuid": "3805b356-c6eb-462f-8ed8-389248a1f16b", 00:10:34.174 "assigned_rate_limits": { 00:10:34.174 "rw_ios_per_sec": 0, 00:10:34.174 "rw_mbytes_per_sec": 0, 00:10:34.174 "r_mbytes_per_sec": 0, 00:10:34.174 "w_mbytes_per_sec": 0 00:10:34.174 }, 00:10:34.174 "claimed": true, 00:10:34.174 "claim_type": "exclusive_write", 00:10:34.174 "zoned": false, 00:10:34.174 "supported_io_types": { 00:10:34.174 "read": true, 00:10:34.174 "write": true, 00:10:34.174 "unmap": true, 00:10:34.174 "flush": true, 00:10:34.174 "reset": true, 00:10:34.174 
"nvme_admin": false, 00:10:34.174 "nvme_io": false, 00:10:34.174 "nvme_io_md": false, 00:10:34.174 "write_zeroes": true, 00:10:34.174 "zcopy": true, 00:10:34.174 "get_zone_info": false, 00:10:34.174 "zone_management": false, 00:10:34.174 "zone_append": false, 00:10:34.174 "compare": false, 00:10:34.174 "compare_and_write": false, 00:10:34.174 "abort": true, 00:10:34.174 "seek_hole": false, 00:10:34.174 "seek_data": false, 00:10:34.174 "copy": true, 00:10:34.174 "nvme_iov_md": false 00:10:34.174 }, 00:10:34.174 "memory_domains": [ 00:10:34.174 { 00:10:34.174 "dma_device_id": "system", 00:10:34.174 "dma_device_type": 1 00:10:34.174 }, 00:10:34.174 { 00:10:34.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.174 "dma_device_type": 2 00:10:34.174 } 00:10:34.174 ], 00:10:34.174 "driver_specific": {} 00:10:34.174 } 00:10:34.174 ] 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.174 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.175 
10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.175 "name": "Existed_Raid", 00:10:34.175 "uuid": "a3b1951e-797a-41cb-81c1-5f47448ef601", 00:10:34.175 "strip_size_kb": 64, 00:10:34.175 "state": "online", 00:10:34.175 "raid_level": "concat", 00:10:34.175 "superblock": false, 00:10:34.175 "num_base_bdevs": 4, 00:10:34.175 "num_base_bdevs_discovered": 4, 00:10:34.175 "num_base_bdevs_operational": 4, 00:10:34.175 "base_bdevs_list": [ 00:10:34.175 { 00:10:34.175 "name": "BaseBdev1", 00:10:34.175 "uuid": "5aab412d-3f55-4304-abc4-528593938268", 00:10:34.175 "is_configured": true, 00:10:34.175 "data_offset": 0, 00:10:34.175 "data_size": 65536 00:10:34.175 }, 00:10:34.175 { 00:10:34.175 "name": "BaseBdev2", 00:10:34.175 "uuid": "27b72bc7-2c25-48de-bb41-93b65da17f09", 00:10:34.175 "is_configured": true, 00:10:34.175 "data_offset": 0, 00:10:34.175 "data_size": 65536 00:10:34.175 }, 00:10:34.175 { 00:10:34.175 "name": "BaseBdev3", 
00:10:34.175 "uuid": "d2d6a45b-a585-4584-a293-1ffecf6b693e", 00:10:34.175 "is_configured": true, 00:10:34.175 "data_offset": 0, 00:10:34.175 "data_size": 65536 00:10:34.175 }, 00:10:34.175 { 00:10:34.175 "name": "BaseBdev4", 00:10:34.175 "uuid": "3805b356-c6eb-462f-8ed8-389248a1f16b", 00:10:34.175 "is_configured": true, 00:10:34.175 "data_offset": 0, 00:10:34.175 "data_size": 65536 00:10:34.175 } 00:10:34.175 ] 00:10:34.175 }' 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.175 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.434 [2024-11-18 10:39:00.242443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.434 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.434 
10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.434 "name": "Existed_Raid", 00:10:34.434 "aliases": [ 00:10:34.434 "a3b1951e-797a-41cb-81c1-5f47448ef601" 00:10:34.434 ], 00:10:34.434 "product_name": "Raid Volume", 00:10:34.434 "block_size": 512, 00:10:34.434 "num_blocks": 262144, 00:10:34.434 "uuid": "a3b1951e-797a-41cb-81c1-5f47448ef601", 00:10:34.434 "assigned_rate_limits": { 00:10:34.434 "rw_ios_per_sec": 0, 00:10:34.434 "rw_mbytes_per_sec": 0, 00:10:34.434 "r_mbytes_per_sec": 0, 00:10:34.435 "w_mbytes_per_sec": 0 00:10:34.435 }, 00:10:34.435 "claimed": false, 00:10:34.435 "zoned": false, 00:10:34.435 "supported_io_types": { 00:10:34.435 "read": true, 00:10:34.435 "write": true, 00:10:34.435 "unmap": true, 00:10:34.435 "flush": true, 00:10:34.435 "reset": true, 00:10:34.435 "nvme_admin": false, 00:10:34.435 "nvme_io": false, 00:10:34.435 "nvme_io_md": false, 00:10:34.435 "write_zeroes": true, 00:10:34.435 "zcopy": false, 00:10:34.435 "get_zone_info": false, 00:10:34.435 "zone_management": false, 00:10:34.435 "zone_append": false, 00:10:34.435 "compare": false, 00:10:34.435 "compare_and_write": false, 00:10:34.435 "abort": false, 00:10:34.435 "seek_hole": false, 00:10:34.435 "seek_data": false, 00:10:34.435 "copy": false, 00:10:34.435 "nvme_iov_md": false 00:10:34.435 }, 00:10:34.435 "memory_domains": [ 00:10:34.435 { 00:10:34.435 "dma_device_id": "system", 00:10:34.435 "dma_device_type": 1 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.435 "dma_device_type": 2 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "dma_device_id": "system", 00:10:34.435 "dma_device_type": 1 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.435 "dma_device_type": 2 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "dma_device_id": "system", 00:10:34.435 "dma_device_type": 1 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:34.435 "dma_device_type": 2 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "dma_device_id": "system", 00:10:34.435 "dma_device_type": 1 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.435 "dma_device_type": 2 00:10:34.435 } 00:10:34.435 ], 00:10:34.435 "driver_specific": { 00:10:34.435 "raid": { 00:10:34.435 "uuid": "a3b1951e-797a-41cb-81c1-5f47448ef601", 00:10:34.435 "strip_size_kb": 64, 00:10:34.435 "state": "online", 00:10:34.435 "raid_level": "concat", 00:10:34.435 "superblock": false, 00:10:34.435 "num_base_bdevs": 4, 00:10:34.435 "num_base_bdevs_discovered": 4, 00:10:34.435 "num_base_bdevs_operational": 4, 00:10:34.435 "base_bdevs_list": [ 00:10:34.435 { 00:10:34.435 "name": "BaseBdev1", 00:10:34.435 "uuid": "5aab412d-3f55-4304-abc4-528593938268", 00:10:34.435 "is_configured": true, 00:10:34.435 "data_offset": 0, 00:10:34.435 "data_size": 65536 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "name": "BaseBdev2", 00:10:34.435 "uuid": "27b72bc7-2c25-48de-bb41-93b65da17f09", 00:10:34.435 "is_configured": true, 00:10:34.435 "data_offset": 0, 00:10:34.435 "data_size": 65536 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "name": "BaseBdev3", 00:10:34.435 "uuid": "d2d6a45b-a585-4584-a293-1ffecf6b693e", 00:10:34.435 "is_configured": true, 00:10:34.435 "data_offset": 0, 00:10:34.435 "data_size": 65536 00:10:34.435 }, 00:10:34.435 { 00:10:34.435 "name": "BaseBdev4", 00:10:34.435 "uuid": "3805b356-c6eb-462f-8ed8-389248a1f16b", 00:10:34.435 "is_configured": true, 00:10:34.435 "data_offset": 0, 00:10:34.435 "data_size": 65536 00:10:34.435 } 00:10:34.435 ] 00:10:34.435 } 00:10:34.435 } 00:10:34.435 }' 00:10:34.435 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.694 BaseBdev2 
00:10:34.694 BaseBdev3 00:10:34.694 BaseBdev4' 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.694 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.695 10:39:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.695 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.955 10:39:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.955 [2024-11-18 10:39:00.589564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.955 [2024-11-18 10:39:00.589593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.955 [2024-11-18 10:39:00.589641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.955 "name": "Existed_Raid", 00:10:34.955 "uuid": "a3b1951e-797a-41cb-81c1-5f47448ef601", 00:10:34.955 "strip_size_kb": 64, 00:10:34.955 "state": "offline", 00:10:34.955 "raid_level": "concat", 00:10:34.955 "superblock": false, 00:10:34.955 "num_base_bdevs": 4, 00:10:34.955 "num_base_bdevs_discovered": 3, 00:10:34.955 "num_base_bdevs_operational": 3, 00:10:34.955 "base_bdevs_list": [ 00:10:34.955 { 00:10:34.955 "name": null, 00:10:34.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.955 "is_configured": false, 00:10:34.955 "data_offset": 0, 00:10:34.955 "data_size": 65536 00:10:34.955 }, 00:10:34.955 { 00:10:34.955 "name": "BaseBdev2", 00:10:34.955 "uuid": "27b72bc7-2c25-48de-bb41-93b65da17f09", 00:10:34.955 "is_configured": 
true, 00:10:34.955 "data_offset": 0, 00:10:34.955 "data_size": 65536 00:10:34.955 }, 00:10:34.955 { 00:10:34.955 "name": "BaseBdev3", 00:10:34.955 "uuid": "d2d6a45b-a585-4584-a293-1ffecf6b693e", 00:10:34.955 "is_configured": true, 00:10:34.955 "data_offset": 0, 00:10:34.955 "data_size": 65536 00:10:34.955 }, 00:10:34.955 { 00:10:34.955 "name": "BaseBdev4", 00:10:34.955 "uuid": "3805b356-c6eb-462f-8ed8-389248a1f16b", 00:10:34.955 "is_configured": true, 00:10:34.955 "data_offset": 0, 00:10:34.955 "data_size": 65536 00:10:34.955 } 00:10:34.955 ] 00:10:34.955 }' 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.955 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.571 [2024-11-18 10:39:01.187092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.571 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.572 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.572 [2024-11-18 10:39:01.341653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.572 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.572 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.572 10:39:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.572 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.572 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.572 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.572 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.831 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.831 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.831 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.831 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.832 [2024-11-18 10:39:01.495840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:35.832 [2024-11-18 10:39:01.495903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.832 BaseBdev2 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.832 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.092 [ 00:10:36.092 { 00:10:36.092 "name": "BaseBdev2", 00:10:36.092 "aliases": [ 00:10:36.092 "d67f5b98-41e6-47be-a25d-a3de0091edba" 00:10:36.092 ], 00:10:36.092 "product_name": "Malloc disk", 00:10:36.092 "block_size": 512, 00:10:36.092 "num_blocks": 65536, 00:10:36.092 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:36.092 "assigned_rate_limits": { 00:10:36.092 "rw_ios_per_sec": 0, 00:10:36.092 "rw_mbytes_per_sec": 0, 00:10:36.092 "r_mbytes_per_sec": 0, 00:10:36.092 "w_mbytes_per_sec": 0 00:10:36.092 }, 00:10:36.092 "claimed": false, 00:10:36.092 "zoned": false, 00:10:36.092 "supported_io_types": { 00:10:36.092 "read": true, 00:10:36.092 "write": true, 00:10:36.092 "unmap": true, 00:10:36.092 "flush": true, 00:10:36.092 "reset": true, 00:10:36.092 "nvme_admin": false, 00:10:36.092 "nvme_io": false, 00:10:36.092 "nvme_io_md": false, 00:10:36.092 "write_zeroes": true, 00:10:36.092 "zcopy": true, 00:10:36.092 "get_zone_info": false, 00:10:36.092 "zone_management": false, 00:10:36.092 "zone_append": false, 00:10:36.092 "compare": false, 00:10:36.092 "compare_and_write": false, 00:10:36.092 "abort": true, 00:10:36.092 "seek_hole": false, 00:10:36.092 
"seek_data": false, 00:10:36.092 "copy": true, 00:10:36.092 "nvme_iov_md": false 00:10:36.092 }, 00:10:36.092 "memory_domains": [ 00:10:36.092 { 00:10:36.092 "dma_device_id": "system", 00:10:36.092 "dma_device_type": 1 00:10:36.092 }, 00:10:36.092 { 00:10:36.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.092 "dma_device_type": 2 00:10:36.092 } 00:10:36.092 ], 00:10:36.092 "driver_specific": {} 00:10:36.092 } 00:10:36.092 ] 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.092 BaseBdev3 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.092 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.092 [ 00:10:36.092 { 00:10:36.092 "name": "BaseBdev3", 00:10:36.092 "aliases": [ 00:10:36.092 "bf8091bb-be30-4405-a60b-3e4b6da01392" 00:10:36.092 ], 00:10:36.092 "product_name": "Malloc disk", 00:10:36.092 "block_size": 512, 00:10:36.092 "num_blocks": 65536, 00:10:36.092 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:36.092 "assigned_rate_limits": { 00:10:36.092 "rw_ios_per_sec": 0, 00:10:36.092 "rw_mbytes_per_sec": 0, 00:10:36.093 "r_mbytes_per_sec": 0, 00:10:36.093 "w_mbytes_per_sec": 0 00:10:36.093 }, 00:10:36.093 "claimed": false, 00:10:36.093 "zoned": false, 00:10:36.093 "supported_io_types": { 00:10:36.093 "read": true, 00:10:36.093 "write": true, 00:10:36.093 "unmap": true, 00:10:36.093 "flush": true, 00:10:36.093 "reset": true, 00:10:36.093 "nvme_admin": false, 00:10:36.093 "nvme_io": false, 00:10:36.093 "nvme_io_md": false, 00:10:36.093 "write_zeroes": true, 00:10:36.093 "zcopy": true, 00:10:36.093 "get_zone_info": false, 00:10:36.093 "zone_management": false, 00:10:36.093 "zone_append": false, 00:10:36.093 "compare": false, 00:10:36.093 "compare_and_write": false, 00:10:36.093 "abort": true, 00:10:36.093 "seek_hole": false, 00:10:36.093 "seek_data": false, 
00:10:36.093 "copy": true, 00:10:36.093 "nvme_iov_md": false 00:10:36.093 }, 00:10:36.093 "memory_domains": [ 00:10:36.093 { 00:10:36.093 "dma_device_id": "system", 00:10:36.093 "dma_device_type": 1 00:10:36.093 }, 00:10:36.093 { 00:10:36.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.093 "dma_device_type": 2 00:10:36.093 } 00:10:36.093 ], 00:10:36.093 "driver_specific": {} 00:10:36.093 } 00:10:36.093 ] 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.093 BaseBdev4 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.093 
10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.093 [ 00:10:36.093 { 00:10:36.093 "name": "BaseBdev4", 00:10:36.093 "aliases": [ 00:10:36.093 "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7" 00:10:36.093 ], 00:10:36.093 "product_name": "Malloc disk", 00:10:36.093 "block_size": 512, 00:10:36.093 "num_blocks": 65536, 00:10:36.093 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:36.093 "assigned_rate_limits": { 00:10:36.093 "rw_ios_per_sec": 0, 00:10:36.093 "rw_mbytes_per_sec": 0, 00:10:36.093 "r_mbytes_per_sec": 0, 00:10:36.093 "w_mbytes_per_sec": 0 00:10:36.093 }, 00:10:36.093 "claimed": false, 00:10:36.093 "zoned": false, 00:10:36.093 "supported_io_types": { 00:10:36.093 "read": true, 00:10:36.093 "write": true, 00:10:36.093 "unmap": true, 00:10:36.093 "flush": true, 00:10:36.093 "reset": true, 00:10:36.093 "nvme_admin": false, 00:10:36.093 "nvme_io": false, 00:10:36.093 "nvme_io_md": false, 00:10:36.093 "write_zeroes": true, 00:10:36.093 "zcopy": true, 00:10:36.093 "get_zone_info": false, 00:10:36.093 "zone_management": false, 00:10:36.093 "zone_append": false, 00:10:36.093 "compare": false, 00:10:36.093 "compare_and_write": false, 00:10:36.093 "abort": true, 00:10:36.093 "seek_hole": false, 00:10:36.093 "seek_data": false, 00:10:36.093 
"copy": true, 00:10:36.093 "nvme_iov_md": false 00:10:36.093 }, 00:10:36.093 "memory_domains": [ 00:10:36.093 { 00:10:36.093 "dma_device_id": "system", 00:10:36.093 "dma_device_type": 1 00:10:36.093 }, 00:10:36.093 { 00:10:36.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.093 "dma_device_type": 2 00:10:36.093 } 00:10:36.093 ], 00:10:36.093 "driver_specific": {} 00:10:36.093 } 00:10:36.093 ] 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.093 [2024-11-18 10:39:01.905780] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.093 [2024-11-18 10:39:01.905910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.093 [2024-11-18 10:39:01.905953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.093 [2024-11-18 10:39:01.907999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.093 [2024-11-18 10:39:01.908091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.093 10:39:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.093 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.093 "name": "Existed_Raid", 00:10:36.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.093 "strip_size_kb": 64, 00:10:36.093 "state": "configuring", 00:10:36.094 
"raid_level": "concat", 00:10:36.094 "superblock": false, 00:10:36.094 "num_base_bdevs": 4, 00:10:36.094 "num_base_bdevs_discovered": 3, 00:10:36.094 "num_base_bdevs_operational": 4, 00:10:36.094 "base_bdevs_list": [ 00:10:36.094 { 00:10:36.094 "name": "BaseBdev1", 00:10:36.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.094 "is_configured": false, 00:10:36.094 "data_offset": 0, 00:10:36.094 "data_size": 0 00:10:36.094 }, 00:10:36.094 { 00:10:36.094 "name": "BaseBdev2", 00:10:36.094 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:36.094 "is_configured": true, 00:10:36.094 "data_offset": 0, 00:10:36.094 "data_size": 65536 00:10:36.094 }, 00:10:36.094 { 00:10:36.094 "name": "BaseBdev3", 00:10:36.094 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:36.094 "is_configured": true, 00:10:36.094 "data_offset": 0, 00:10:36.094 "data_size": 65536 00:10:36.094 }, 00:10:36.094 { 00:10:36.094 "name": "BaseBdev4", 00:10:36.094 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:36.094 "is_configured": true, 00:10:36.094 "data_offset": 0, 00:10:36.094 "data_size": 65536 00:10:36.094 } 00:10:36.094 ] 00:10:36.094 }' 00:10:36.094 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.094 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.662 [2024-11-18 10:39:02.297095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.662 "name": "Existed_Raid", 00:10:36.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.662 "strip_size_kb": 64, 00:10:36.662 "state": "configuring", 00:10:36.662 "raid_level": "concat", 00:10:36.662 "superblock": false, 
00:10:36.662 "num_base_bdevs": 4, 00:10:36.662 "num_base_bdevs_discovered": 2, 00:10:36.662 "num_base_bdevs_operational": 4, 00:10:36.662 "base_bdevs_list": [ 00:10:36.662 { 00:10:36.662 "name": "BaseBdev1", 00:10:36.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.662 "is_configured": false, 00:10:36.662 "data_offset": 0, 00:10:36.662 "data_size": 0 00:10:36.662 }, 00:10:36.662 { 00:10:36.662 "name": null, 00:10:36.662 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:36.662 "is_configured": false, 00:10:36.662 "data_offset": 0, 00:10:36.662 "data_size": 65536 00:10:36.662 }, 00:10:36.662 { 00:10:36.662 "name": "BaseBdev3", 00:10:36.662 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:36.662 "is_configured": true, 00:10:36.662 "data_offset": 0, 00:10:36.662 "data_size": 65536 00:10:36.662 }, 00:10:36.662 { 00:10:36.662 "name": "BaseBdev4", 00:10:36.662 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:36.662 "is_configured": true, 00:10:36.662 "data_offset": 0, 00:10:36.662 "data_size": 65536 00:10:36.662 } 00:10:36.662 ] 00:10:36.662 }' 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.662 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.921 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.921 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.921 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.921 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.921 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.921 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:36.921 10:39:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.921 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.921 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.181 [2024-11-18 10:39:02.825651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.181 BaseBdev1 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.181 [ 00:10:37.181 { 00:10:37.181 "name": "BaseBdev1", 00:10:37.181 "aliases": [ 00:10:37.181 "af123aea-73eb-48ea-937c-74cc77e01e3e" 00:10:37.181 ], 00:10:37.181 "product_name": "Malloc disk", 00:10:37.181 "block_size": 512, 00:10:37.181 "num_blocks": 65536, 00:10:37.181 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:37.181 "assigned_rate_limits": { 00:10:37.181 "rw_ios_per_sec": 0, 00:10:37.181 "rw_mbytes_per_sec": 0, 00:10:37.181 "r_mbytes_per_sec": 0, 00:10:37.181 "w_mbytes_per_sec": 0 00:10:37.181 }, 00:10:37.181 "claimed": true, 00:10:37.181 "claim_type": "exclusive_write", 00:10:37.181 "zoned": false, 00:10:37.181 "supported_io_types": { 00:10:37.181 "read": true, 00:10:37.181 "write": true, 00:10:37.181 "unmap": true, 00:10:37.181 "flush": true, 00:10:37.181 "reset": true, 00:10:37.181 "nvme_admin": false, 00:10:37.181 "nvme_io": false, 00:10:37.181 "nvme_io_md": false, 00:10:37.181 "write_zeroes": true, 00:10:37.181 "zcopy": true, 00:10:37.181 "get_zone_info": false, 00:10:37.181 "zone_management": false, 00:10:37.181 "zone_append": false, 00:10:37.181 "compare": false, 00:10:37.181 "compare_and_write": false, 00:10:37.181 "abort": true, 00:10:37.181 "seek_hole": false, 00:10:37.181 "seek_data": false, 00:10:37.181 "copy": true, 00:10:37.181 "nvme_iov_md": false 00:10:37.181 }, 00:10:37.181 "memory_domains": [ 00:10:37.181 { 00:10:37.181 "dma_device_id": "system", 00:10:37.181 "dma_device_type": 1 00:10:37.181 }, 00:10:37.181 { 00:10:37.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.181 "dma_device_type": 2 00:10:37.181 } 00:10:37.181 ], 00:10:37.181 "driver_specific": {} 00:10:37.181 } 00:10:37.181 ] 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.181 "name": "Existed_Raid", 00:10:37.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.181 "strip_size_kb": 64, 00:10:37.181 "state": "configuring", 00:10:37.181 "raid_level": "concat", 00:10:37.181 "superblock": false, 
00:10:37.181 "num_base_bdevs": 4, 00:10:37.181 "num_base_bdevs_discovered": 3, 00:10:37.181 "num_base_bdevs_operational": 4, 00:10:37.181 "base_bdevs_list": [ 00:10:37.181 { 00:10:37.181 "name": "BaseBdev1", 00:10:37.181 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:37.181 "is_configured": true, 00:10:37.181 "data_offset": 0, 00:10:37.181 "data_size": 65536 00:10:37.181 }, 00:10:37.181 { 00:10:37.181 "name": null, 00:10:37.181 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:37.181 "is_configured": false, 00:10:37.181 "data_offset": 0, 00:10:37.181 "data_size": 65536 00:10:37.181 }, 00:10:37.181 { 00:10:37.181 "name": "BaseBdev3", 00:10:37.181 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:37.181 "is_configured": true, 00:10:37.181 "data_offset": 0, 00:10:37.181 "data_size": 65536 00:10:37.181 }, 00:10:37.181 { 00:10:37.181 "name": "BaseBdev4", 00:10:37.181 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:37.181 "is_configured": true, 00:10:37.181 "data_offset": 0, 00:10:37.181 "data_size": 65536 00:10:37.181 } 00:10:37.181 ] 00:10:37.181 }' 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.181 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.439 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.439 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.439 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.439 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:37.697 10:39:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.697 [2024-11-18 10:39:03.364777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.697 10:39:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.697 "name": "Existed_Raid", 00:10:37.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.697 "strip_size_kb": 64, 00:10:37.697 "state": "configuring", 00:10:37.697 "raid_level": "concat", 00:10:37.697 "superblock": false, 00:10:37.697 "num_base_bdevs": 4, 00:10:37.697 "num_base_bdevs_discovered": 2, 00:10:37.697 "num_base_bdevs_operational": 4, 00:10:37.697 "base_bdevs_list": [ 00:10:37.697 { 00:10:37.697 "name": "BaseBdev1", 00:10:37.697 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:37.697 "is_configured": true, 00:10:37.697 "data_offset": 0, 00:10:37.697 "data_size": 65536 00:10:37.697 }, 00:10:37.697 { 00:10:37.697 "name": null, 00:10:37.697 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:37.697 "is_configured": false, 00:10:37.697 "data_offset": 0, 00:10:37.697 "data_size": 65536 00:10:37.697 }, 00:10:37.697 { 00:10:37.697 "name": null, 00:10:37.697 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:37.697 "is_configured": false, 00:10:37.697 "data_offset": 0, 00:10:37.697 "data_size": 65536 00:10:37.697 }, 00:10:37.697 { 00:10:37.697 "name": "BaseBdev4", 00:10:37.697 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:37.697 "is_configured": true, 00:10:37.697 "data_offset": 0, 00:10:37.697 "data_size": 65536 00:10:37.697 } 00:10:37.697 ] 00:10:37.697 }' 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.697 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.955 [2024-11-18 10:39:03.808020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.955 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.332 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.332 "name": "Existed_Raid", 00:10:38.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.332 "strip_size_kb": 64, 00:10:38.332 "state": "configuring", 00:10:38.332 "raid_level": "concat", 00:10:38.332 "superblock": false, 00:10:38.332 "num_base_bdevs": 4, 00:10:38.332 "num_base_bdevs_discovered": 3, 00:10:38.332 "num_base_bdevs_operational": 4, 00:10:38.332 "base_bdevs_list": [ 00:10:38.332 { 00:10:38.332 "name": "BaseBdev1", 00:10:38.332 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:38.332 "is_configured": true, 00:10:38.332 "data_offset": 0, 00:10:38.332 "data_size": 65536 00:10:38.332 }, 00:10:38.332 { 00:10:38.332 "name": null, 00:10:38.332 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:38.332 "is_configured": false, 00:10:38.332 "data_offset": 0, 00:10:38.332 "data_size": 65536 00:10:38.332 }, 00:10:38.332 { 00:10:38.332 "name": "BaseBdev3", 00:10:38.332 "uuid": 
"bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:38.332 "is_configured": true, 00:10:38.332 "data_offset": 0, 00:10:38.332 "data_size": 65536 00:10:38.332 }, 00:10:38.332 { 00:10:38.332 "name": "BaseBdev4", 00:10:38.332 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:38.332 "is_configured": true, 00:10:38.332 "data_offset": 0, 00:10:38.332 "data_size": 65536 00:10:38.332 } 00:10:38.332 ] 00:10:38.332 }' 00:10:38.332 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.332 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.592 [2024-11-18 10:39:04.307220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.592 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.592 "name": "Existed_Raid", 00:10:38.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.592 "strip_size_kb": 64, 00:10:38.592 "state": "configuring", 00:10:38.592 "raid_level": "concat", 00:10:38.592 "superblock": false, 00:10:38.592 "num_base_bdevs": 4, 00:10:38.592 
"num_base_bdevs_discovered": 2, 00:10:38.592 "num_base_bdevs_operational": 4, 00:10:38.592 "base_bdevs_list": [ 00:10:38.592 { 00:10:38.592 "name": null, 00:10:38.592 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:38.592 "is_configured": false, 00:10:38.592 "data_offset": 0, 00:10:38.592 "data_size": 65536 00:10:38.592 }, 00:10:38.592 { 00:10:38.592 "name": null, 00:10:38.592 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:38.592 "is_configured": false, 00:10:38.592 "data_offset": 0, 00:10:38.593 "data_size": 65536 00:10:38.593 }, 00:10:38.593 { 00:10:38.593 "name": "BaseBdev3", 00:10:38.593 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:38.593 "is_configured": true, 00:10:38.593 "data_offset": 0, 00:10:38.593 "data_size": 65536 00:10:38.593 }, 00:10:38.593 { 00:10:38.593 "name": "BaseBdev4", 00:10:38.593 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:38.593 "is_configured": true, 00:10:38.593 "data_offset": 0, 00:10:38.593 "data_size": 65536 00:10:38.593 } 00:10:38.593 ] 00:10:38.593 }' 00:10:38.593 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.593 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.161 [2024-11-18 10:39:04.880222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.161 "name": "Existed_Raid", 00:10:39.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.161 "strip_size_kb": 64, 00:10:39.161 "state": "configuring", 00:10:39.161 "raid_level": "concat", 00:10:39.161 "superblock": false, 00:10:39.161 "num_base_bdevs": 4, 00:10:39.161 "num_base_bdevs_discovered": 3, 00:10:39.161 "num_base_bdevs_operational": 4, 00:10:39.161 "base_bdevs_list": [ 00:10:39.161 { 00:10:39.161 "name": null, 00:10:39.161 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:39.161 "is_configured": false, 00:10:39.161 "data_offset": 0, 00:10:39.161 "data_size": 65536 00:10:39.161 }, 00:10:39.161 { 00:10:39.161 "name": "BaseBdev2", 00:10:39.161 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:39.161 "is_configured": true, 00:10:39.161 "data_offset": 0, 00:10:39.161 "data_size": 65536 00:10:39.161 }, 00:10:39.161 { 00:10:39.161 "name": "BaseBdev3", 00:10:39.161 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:39.161 "is_configured": true, 00:10:39.161 "data_offset": 0, 00:10:39.161 "data_size": 65536 00:10:39.161 }, 00:10:39.161 { 00:10:39.161 "name": "BaseBdev4", 00:10:39.161 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:39.161 "is_configured": true, 00:10:39.161 "data_offset": 0, 00:10:39.161 "data_size": 65536 00:10:39.161 } 00:10:39.161 ] 00:10:39.161 }' 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.161 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.421 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:39.421 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.421 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.421 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u af123aea-73eb-48ea-937c-74cc77e01e3e 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.681 [2024-11-18 10:39:05.443043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:39.681 [2024-11-18 10:39:05.443148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:39.681 [2024-11-18 10:39:05.443161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:39.681 [2024-11-18 10:39:05.443503] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:39.681 [2024-11-18 10:39:05.443665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:39.681 [2024-11-18 10:39:05.443678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:39.681 [2024-11-18 10:39:05.443953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.681 NewBaseBdev 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.681 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.682 10:39:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.682 [ 00:10:39.682 { 00:10:39.682 "name": "NewBaseBdev", 00:10:39.682 "aliases": [ 00:10:39.682 "af123aea-73eb-48ea-937c-74cc77e01e3e" 00:10:39.682 ], 00:10:39.682 "product_name": "Malloc disk", 00:10:39.682 "block_size": 512, 00:10:39.682 "num_blocks": 65536, 00:10:39.682 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:39.682 "assigned_rate_limits": { 00:10:39.682 "rw_ios_per_sec": 0, 00:10:39.682 "rw_mbytes_per_sec": 0, 00:10:39.682 "r_mbytes_per_sec": 0, 00:10:39.682 "w_mbytes_per_sec": 0 00:10:39.682 }, 00:10:39.682 "claimed": true, 00:10:39.682 "claim_type": "exclusive_write", 00:10:39.682 "zoned": false, 00:10:39.682 "supported_io_types": { 00:10:39.682 "read": true, 00:10:39.682 "write": true, 00:10:39.682 "unmap": true, 00:10:39.682 "flush": true, 00:10:39.682 "reset": true, 00:10:39.682 "nvme_admin": false, 00:10:39.682 "nvme_io": false, 00:10:39.682 "nvme_io_md": false, 00:10:39.682 "write_zeroes": true, 00:10:39.682 "zcopy": true, 00:10:39.682 "get_zone_info": false, 00:10:39.682 "zone_management": false, 00:10:39.682 "zone_append": false, 00:10:39.682 "compare": false, 00:10:39.682 "compare_and_write": false, 00:10:39.682 "abort": true, 00:10:39.682 "seek_hole": false, 00:10:39.682 "seek_data": false, 00:10:39.682 "copy": true, 00:10:39.682 "nvme_iov_md": false 00:10:39.682 }, 00:10:39.682 "memory_domains": [ 00:10:39.682 { 00:10:39.682 "dma_device_id": "system", 00:10:39.682 "dma_device_type": 1 00:10:39.682 }, 00:10:39.682 { 00:10:39.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.682 "dma_device_type": 2 00:10:39.682 } 00:10:39.682 ], 00:10:39.682 "driver_specific": {} 00:10:39.682 } 00:10:39.682 ] 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.682 10:39:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.682 "name": "Existed_Raid", 00:10:39.682 "uuid": "32bd4a5e-b538-451a-a13c-70714bace791", 00:10:39.682 "strip_size_kb": 64, 00:10:39.682 "state": "online", 00:10:39.682 "raid_level": 
"concat", 00:10:39.682 "superblock": false, 00:10:39.682 "num_base_bdevs": 4, 00:10:39.682 "num_base_bdevs_discovered": 4, 00:10:39.682 "num_base_bdevs_operational": 4, 00:10:39.682 "base_bdevs_list": [ 00:10:39.682 { 00:10:39.682 "name": "NewBaseBdev", 00:10:39.682 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:39.682 "is_configured": true, 00:10:39.682 "data_offset": 0, 00:10:39.682 "data_size": 65536 00:10:39.682 }, 00:10:39.682 { 00:10:39.682 "name": "BaseBdev2", 00:10:39.682 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:39.682 "is_configured": true, 00:10:39.682 "data_offset": 0, 00:10:39.682 "data_size": 65536 00:10:39.682 }, 00:10:39.682 { 00:10:39.682 "name": "BaseBdev3", 00:10:39.682 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:39.682 "is_configured": true, 00:10:39.682 "data_offset": 0, 00:10:39.682 "data_size": 65536 00:10:39.682 }, 00:10:39.682 { 00:10:39.682 "name": "BaseBdev4", 00:10:39.682 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:39.682 "is_configured": true, 00:10:39.682 "data_offset": 0, 00:10:39.682 "data_size": 65536 00:10:39.682 } 00:10:39.682 ] 00:10:39.682 }' 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.682 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.250 [2024-11-18 10:39:05.950576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.250 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.250 "name": "Existed_Raid", 00:10:40.250 "aliases": [ 00:10:40.250 "32bd4a5e-b538-451a-a13c-70714bace791" 00:10:40.250 ], 00:10:40.250 "product_name": "Raid Volume", 00:10:40.250 "block_size": 512, 00:10:40.250 "num_blocks": 262144, 00:10:40.250 "uuid": "32bd4a5e-b538-451a-a13c-70714bace791", 00:10:40.250 "assigned_rate_limits": { 00:10:40.250 "rw_ios_per_sec": 0, 00:10:40.250 "rw_mbytes_per_sec": 0, 00:10:40.250 "r_mbytes_per_sec": 0, 00:10:40.250 "w_mbytes_per_sec": 0 00:10:40.250 }, 00:10:40.251 "claimed": false, 00:10:40.251 "zoned": false, 00:10:40.251 "supported_io_types": { 00:10:40.251 "read": true, 00:10:40.251 "write": true, 00:10:40.251 "unmap": true, 00:10:40.251 "flush": true, 00:10:40.251 "reset": true, 00:10:40.251 "nvme_admin": false, 00:10:40.251 "nvme_io": false, 00:10:40.251 "nvme_io_md": false, 00:10:40.251 "write_zeroes": true, 00:10:40.251 "zcopy": false, 00:10:40.251 "get_zone_info": false, 00:10:40.251 "zone_management": false, 00:10:40.251 "zone_append": false, 00:10:40.251 "compare": false, 00:10:40.251 "compare_and_write": false, 00:10:40.251 "abort": false, 00:10:40.251 "seek_hole": false, 00:10:40.251 "seek_data": false, 00:10:40.251 "copy": false, 
00:10:40.251 "nvme_iov_md": false 00:10:40.251 }, 00:10:40.251 "memory_domains": [ 00:10:40.251 { 00:10:40.251 "dma_device_id": "system", 00:10:40.251 "dma_device_type": 1 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.251 "dma_device_type": 2 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "dma_device_id": "system", 00:10:40.251 "dma_device_type": 1 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.251 "dma_device_type": 2 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "dma_device_id": "system", 00:10:40.251 "dma_device_type": 1 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.251 "dma_device_type": 2 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "dma_device_id": "system", 00:10:40.251 "dma_device_type": 1 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.251 "dma_device_type": 2 00:10:40.251 } 00:10:40.251 ], 00:10:40.251 "driver_specific": { 00:10:40.251 "raid": { 00:10:40.251 "uuid": "32bd4a5e-b538-451a-a13c-70714bace791", 00:10:40.251 "strip_size_kb": 64, 00:10:40.251 "state": "online", 00:10:40.251 "raid_level": "concat", 00:10:40.251 "superblock": false, 00:10:40.251 "num_base_bdevs": 4, 00:10:40.251 "num_base_bdevs_discovered": 4, 00:10:40.251 "num_base_bdevs_operational": 4, 00:10:40.251 "base_bdevs_list": [ 00:10:40.251 { 00:10:40.251 "name": "NewBaseBdev", 00:10:40.251 "uuid": "af123aea-73eb-48ea-937c-74cc77e01e3e", 00:10:40.251 "is_configured": true, 00:10:40.251 "data_offset": 0, 00:10:40.251 "data_size": 65536 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "name": "BaseBdev2", 00:10:40.251 "uuid": "d67f5b98-41e6-47be-a25d-a3de0091edba", 00:10:40.251 "is_configured": true, 00:10:40.251 "data_offset": 0, 00:10:40.251 "data_size": 65536 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "name": "BaseBdev3", 00:10:40.251 "uuid": "bf8091bb-be30-4405-a60b-3e4b6da01392", 00:10:40.251 
"is_configured": true, 00:10:40.251 "data_offset": 0, 00:10:40.251 "data_size": 65536 00:10:40.251 }, 00:10:40.251 { 00:10:40.251 "name": "BaseBdev4", 00:10:40.251 "uuid": "b905ecd6-daa2-45ef-a8c9-7f39eaa92de7", 00:10:40.251 "is_configured": true, 00:10:40.251 "data_offset": 0, 00:10:40.251 "data_size": 65536 00:10:40.251 } 00:10:40.251 ] 00:10:40.251 } 00:10:40.251 } 00:10:40.251 }' 00:10:40.251 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:40.251 BaseBdev2 00:10:40.251 BaseBdev3 00:10:40.251 BaseBdev4' 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.251 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.511 10:39:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.511 10:39:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.511 [2024-11-18 10:39:06.257691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.511 [2024-11-18 10:39:06.257720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.511 [2024-11-18 10:39:06.257792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.511 [2024-11-18 10:39:06.257864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.511 [2024-11-18 10:39:06.257874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71136 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71136 ']' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71136 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71136 00:10:40.511 killing process with pid 71136 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71136' 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71136 00:10:40.511 [2024-11-18 10:39:06.302218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.511 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71136 00:10:41.081 [2024-11-18 10:39:06.716624] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.020 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.020 00:10:42.020 real 0m11.546s 00:10:42.020 user 0m18.135s 00:10:42.020 sys 0m2.115s 00:10:42.020 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.020 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.020 ************************************ 00:10:42.020 END TEST raid_state_function_test 00:10:42.020 ************************************ 
00:10:42.280 10:39:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:42.280 10:39:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:42.280 10:39:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.280 10:39:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.280 ************************************ 00:10:42.280 START TEST raid_state_function_test_sb 00:10:42.280 ************************************ 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.280 
10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:42.280 10:39:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=71808 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71808' 00:10:42.281 Process raid pid: 71808 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71808 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71808 ']' 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.281 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.281 [2024-11-18 10:39:08.060801] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:42.281 [2024-11-18 10:39:08.060977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.540 [2024-11-18 10:39:08.240822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.540 [2024-11-18 10:39:08.373567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.801 [2024-11-18 10:39:08.609750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.801 [2024-11-18 10:39:08.609789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.062 [2024-11-18 10:39:08.883587] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.062 [2024-11-18 10:39:08.883644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.062 [2024-11-18 10:39:08.883655] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.062 [2024-11-18 10:39:08.883665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.062 [2024-11-18 10:39:08.883672] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:43.062 [2024-11-18 10:39:08.883681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.062 [2024-11-18 10:39:08.883686] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.062 [2024-11-18 10:39:08.883696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.062 
10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.062 "name": "Existed_Raid", 00:10:43.062 "uuid": "24b32214-1f99-4c5d-a7f0-e915637d7ec5", 00:10:43.062 "strip_size_kb": 64, 00:10:43.062 "state": "configuring", 00:10:43.062 "raid_level": "concat", 00:10:43.062 "superblock": true, 00:10:43.062 "num_base_bdevs": 4, 00:10:43.062 "num_base_bdevs_discovered": 0, 00:10:43.062 "num_base_bdevs_operational": 4, 00:10:43.062 "base_bdevs_list": [ 00:10:43.062 { 00:10:43.062 "name": "BaseBdev1", 00:10:43.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.062 "is_configured": false, 00:10:43.062 "data_offset": 0, 00:10:43.062 "data_size": 0 00:10:43.062 }, 00:10:43.062 { 00:10:43.062 "name": "BaseBdev2", 00:10:43.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.062 "is_configured": false, 00:10:43.062 "data_offset": 0, 00:10:43.062 "data_size": 0 00:10:43.062 }, 00:10:43.062 { 00:10:43.062 "name": "BaseBdev3", 00:10:43.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.062 "is_configured": false, 00:10:43.062 "data_offset": 0, 00:10:43.062 "data_size": 0 00:10:43.062 }, 00:10:43.062 { 00:10:43.062 "name": "BaseBdev4", 00:10:43.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.062 "is_configured": false, 00:10:43.062 "data_offset": 0, 00:10:43.062 "data_size": 0 00:10:43.062 } 00:10:43.062 ] 00:10:43.062 }' 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.062 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 10:39:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 [2024-11-18 10:39:09.266839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.631 [2024-11-18 10:39:09.266949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 [2024-11-18 10:39:09.278836] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.631 [2024-11-18 10:39:09.278876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.631 [2024-11-18 10:39:09.278885] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.631 [2024-11-18 10:39:09.278895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.631 [2024-11-18 10:39:09.278906] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.631 [2024-11-18 10:39:09.278916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.631 [2024-11-18 10:39:09.278938] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:43.631 [2024-11-18 10:39:09.278947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 [2024-11-18 10:39:09.332924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.631 BaseBdev1 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.631 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 [ 00:10:43.631 { 00:10:43.631 "name": "BaseBdev1", 00:10:43.631 "aliases": [ 00:10:43.631 "b0df8345-f134-42b2-886f-7b50855308fe" 00:10:43.631 ], 00:10:43.631 "product_name": "Malloc disk", 00:10:43.631 "block_size": 512, 00:10:43.631 "num_blocks": 65536, 00:10:43.631 "uuid": "b0df8345-f134-42b2-886f-7b50855308fe", 00:10:43.631 "assigned_rate_limits": { 00:10:43.631 "rw_ios_per_sec": 0, 00:10:43.631 "rw_mbytes_per_sec": 0, 00:10:43.631 "r_mbytes_per_sec": 0, 00:10:43.631 "w_mbytes_per_sec": 0 00:10:43.631 }, 00:10:43.632 "claimed": true, 00:10:43.632 "claim_type": "exclusive_write", 00:10:43.632 "zoned": false, 00:10:43.632 "supported_io_types": { 00:10:43.632 "read": true, 00:10:43.632 "write": true, 00:10:43.632 "unmap": true, 00:10:43.632 "flush": true, 00:10:43.632 "reset": true, 00:10:43.632 "nvme_admin": false, 00:10:43.632 "nvme_io": false, 00:10:43.632 "nvme_io_md": false, 00:10:43.632 "write_zeroes": true, 00:10:43.632 "zcopy": true, 00:10:43.632 "get_zone_info": false, 00:10:43.632 "zone_management": false, 00:10:43.632 "zone_append": false, 00:10:43.632 "compare": false, 00:10:43.632 "compare_and_write": false, 00:10:43.632 "abort": true, 00:10:43.632 "seek_hole": false, 00:10:43.632 "seek_data": false, 00:10:43.632 "copy": true, 00:10:43.632 "nvme_iov_md": false 00:10:43.632 }, 00:10:43.632 "memory_domains": [ 00:10:43.632 { 00:10:43.632 "dma_device_id": "system", 00:10:43.632 "dma_device_type": 1 00:10:43.632 }, 00:10:43.632 { 00:10:43.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.632 "dma_device_type": 2 00:10:43.632 } 
00:10:43.632 ], 00:10:43.632 "driver_specific": {} 00:10:43.632 } 00:10:43.632 ] 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.632 10:39:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.632 "name": "Existed_Raid", 00:10:43.632 "uuid": "81f52c8a-8cca-4850-a6c2-c93efdce0704", 00:10:43.632 "strip_size_kb": 64, 00:10:43.632 "state": "configuring", 00:10:43.632 "raid_level": "concat", 00:10:43.632 "superblock": true, 00:10:43.632 "num_base_bdevs": 4, 00:10:43.632 "num_base_bdevs_discovered": 1, 00:10:43.632 "num_base_bdevs_operational": 4, 00:10:43.632 "base_bdevs_list": [ 00:10:43.632 { 00:10:43.632 "name": "BaseBdev1", 00:10:43.632 "uuid": "b0df8345-f134-42b2-886f-7b50855308fe", 00:10:43.632 "is_configured": true, 00:10:43.632 "data_offset": 2048, 00:10:43.632 "data_size": 63488 00:10:43.632 }, 00:10:43.632 { 00:10:43.632 "name": "BaseBdev2", 00:10:43.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.632 "is_configured": false, 00:10:43.632 "data_offset": 0, 00:10:43.632 "data_size": 0 00:10:43.632 }, 00:10:43.632 { 00:10:43.632 "name": "BaseBdev3", 00:10:43.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.632 "is_configured": false, 00:10:43.632 "data_offset": 0, 00:10:43.632 "data_size": 0 00:10:43.632 }, 00:10:43.632 { 00:10:43.632 "name": "BaseBdev4", 00:10:43.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.632 "is_configured": false, 00:10:43.632 "data_offset": 0, 00:10:43.632 "data_size": 0 00:10:43.632 } 00:10:43.632 ] 00:10:43.632 }' 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.632 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.202 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.202 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.202 10:39:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.202 [2024-11-18 10:39:09.812100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.202 [2024-11-18 10:39:09.812144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:44.202 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.202 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.202 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.202 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.202 [2024-11-18 10:39:09.824154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.202 [2024-11-18 10:39:09.826165] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.202 [2024-11-18 10:39:09.826216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.202 [2024-11-18 10:39:09.826226] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.202 [2024-11-18 10:39:09.826236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.202 [2024-11-18 10:39:09.826243] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.202 [2024-11-18 10:39:09.826251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.202 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:44.203 "name": "Existed_Raid", 00:10:44.203 "uuid": "36a0e183-74e2-4c29-a66c-3c099aa23761", 00:10:44.203 "strip_size_kb": 64, 00:10:44.203 "state": "configuring", 00:10:44.203 "raid_level": "concat", 00:10:44.203 "superblock": true, 00:10:44.203 "num_base_bdevs": 4, 00:10:44.203 "num_base_bdevs_discovered": 1, 00:10:44.203 "num_base_bdevs_operational": 4, 00:10:44.203 "base_bdevs_list": [ 00:10:44.203 { 00:10:44.203 "name": "BaseBdev1", 00:10:44.203 "uuid": "b0df8345-f134-42b2-886f-7b50855308fe", 00:10:44.203 "is_configured": true, 00:10:44.203 "data_offset": 2048, 00:10:44.203 "data_size": 63488 00:10:44.203 }, 00:10:44.203 { 00:10:44.203 "name": "BaseBdev2", 00:10:44.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.203 "is_configured": false, 00:10:44.203 "data_offset": 0, 00:10:44.203 "data_size": 0 00:10:44.203 }, 00:10:44.203 { 00:10:44.203 "name": "BaseBdev3", 00:10:44.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.203 "is_configured": false, 00:10:44.203 "data_offset": 0, 00:10:44.203 "data_size": 0 00:10:44.203 }, 00:10:44.203 { 00:10:44.203 "name": "BaseBdev4", 00:10:44.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.203 "is_configured": false, 00:10:44.203 "data_offset": 0, 00:10:44.203 "data_size": 0 00:10:44.203 } 00:10:44.203 ] 00:10:44.203 }' 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.203 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.462 [2024-11-18 10:39:10.278372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:44.462 BaseBdev2 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.462 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.463 [ 00:10:44.463 { 00:10:44.463 "name": "BaseBdev2", 00:10:44.463 "aliases": [ 00:10:44.463 "3cca29bf-39e6-4e6e-a257-acd7e949bee4" 00:10:44.463 ], 00:10:44.463 "product_name": "Malloc disk", 00:10:44.463 "block_size": 512, 00:10:44.463 "num_blocks": 65536, 00:10:44.463 "uuid": "3cca29bf-39e6-4e6e-a257-acd7e949bee4", 
00:10:44.463 "assigned_rate_limits": { 00:10:44.463 "rw_ios_per_sec": 0, 00:10:44.463 "rw_mbytes_per_sec": 0, 00:10:44.463 "r_mbytes_per_sec": 0, 00:10:44.463 "w_mbytes_per_sec": 0 00:10:44.463 }, 00:10:44.463 "claimed": true, 00:10:44.463 "claim_type": "exclusive_write", 00:10:44.463 "zoned": false, 00:10:44.463 "supported_io_types": { 00:10:44.463 "read": true, 00:10:44.463 "write": true, 00:10:44.463 "unmap": true, 00:10:44.463 "flush": true, 00:10:44.463 "reset": true, 00:10:44.463 "nvme_admin": false, 00:10:44.463 "nvme_io": false, 00:10:44.463 "nvme_io_md": false, 00:10:44.463 "write_zeroes": true, 00:10:44.463 "zcopy": true, 00:10:44.463 "get_zone_info": false, 00:10:44.463 "zone_management": false, 00:10:44.463 "zone_append": false, 00:10:44.463 "compare": false, 00:10:44.463 "compare_and_write": false, 00:10:44.463 "abort": true, 00:10:44.463 "seek_hole": false, 00:10:44.463 "seek_data": false, 00:10:44.463 "copy": true, 00:10:44.463 "nvme_iov_md": false 00:10:44.463 }, 00:10:44.463 "memory_domains": [ 00:10:44.463 { 00:10:44.463 "dma_device_id": "system", 00:10:44.463 "dma_device_type": 1 00:10:44.463 }, 00:10:44.463 { 00:10:44.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.463 "dma_device_type": 2 00:10:44.463 } 00:10:44.463 ], 00:10:44.463 "driver_specific": {} 00:10:44.463 } 00:10:44.463 ] 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.463 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.722 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.722 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.722 "name": "Existed_Raid", 00:10:44.722 "uuid": "36a0e183-74e2-4c29-a66c-3c099aa23761", 00:10:44.722 "strip_size_kb": 64, 00:10:44.722 "state": "configuring", 00:10:44.722 "raid_level": "concat", 00:10:44.722 "superblock": true, 00:10:44.722 "num_base_bdevs": 4, 00:10:44.722 "num_base_bdevs_discovered": 2, 00:10:44.722 
"num_base_bdevs_operational": 4, 00:10:44.722 "base_bdevs_list": [ 00:10:44.722 { 00:10:44.722 "name": "BaseBdev1", 00:10:44.722 "uuid": "b0df8345-f134-42b2-886f-7b50855308fe", 00:10:44.722 "is_configured": true, 00:10:44.722 "data_offset": 2048, 00:10:44.722 "data_size": 63488 00:10:44.722 }, 00:10:44.722 { 00:10:44.722 "name": "BaseBdev2", 00:10:44.722 "uuid": "3cca29bf-39e6-4e6e-a257-acd7e949bee4", 00:10:44.722 "is_configured": true, 00:10:44.722 "data_offset": 2048, 00:10:44.722 "data_size": 63488 00:10:44.722 }, 00:10:44.722 { 00:10:44.722 "name": "BaseBdev3", 00:10:44.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.722 "is_configured": false, 00:10:44.722 "data_offset": 0, 00:10:44.722 "data_size": 0 00:10:44.722 }, 00:10:44.722 { 00:10:44.722 "name": "BaseBdev4", 00:10:44.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.722 "is_configured": false, 00:10:44.722 "data_offset": 0, 00:10:44.722 "data_size": 0 00:10:44.722 } 00:10:44.722 ] 00:10:44.722 }' 00:10:44.722 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.722 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.980 [2024-11-18 10:39:10.827999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.980 BaseBdev3 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.980 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.980 [ 00:10:44.980 { 00:10:44.980 "name": "BaseBdev3", 00:10:44.980 "aliases": [ 00:10:44.980 "cc2b6099-63c1-4d31-9bd7-a7aef75460a4" 00:10:44.980 ], 00:10:44.980 "product_name": "Malloc disk", 00:10:44.980 "block_size": 512, 00:10:44.980 "num_blocks": 65536, 00:10:44.980 "uuid": "cc2b6099-63c1-4d31-9bd7-a7aef75460a4", 00:10:44.980 "assigned_rate_limits": { 00:10:44.980 "rw_ios_per_sec": 0, 00:10:44.980 "rw_mbytes_per_sec": 0, 00:10:44.980 "r_mbytes_per_sec": 0, 00:10:44.980 "w_mbytes_per_sec": 0 00:10:44.980 }, 00:10:44.980 "claimed": true, 00:10:44.980 "claim_type": "exclusive_write", 00:10:44.980 "zoned": false, 00:10:44.980 "supported_io_types": { 
00:10:44.980 "read": true, 00:10:44.980 "write": true, 00:10:44.980 "unmap": true, 00:10:44.980 "flush": true, 00:10:44.980 "reset": true, 00:10:44.980 "nvme_admin": false, 00:10:44.980 "nvme_io": false, 00:10:44.980 "nvme_io_md": false, 00:10:44.980 "write_zeroes": true, 00:10:44.980 "zcopy": true, 00:10:44.980 "get_zone_info": false, 00:10:44.980 "zone_management": false, 00:10:44.980 "zone_append": false, 00:10:44.980 "compare": false, 00:10:44.980 "compare_and_write": false, 00:10:45.238 "abort": true, 00:10:45.238 "seek_hole": false, 00:10:45.238 "seek_data": false, 00:10:45.238 "copy": true, 00:10:45.238 "nvme_iov_md": false 00:10:45.238 }, 00:10:45.238 "memory_domains": [ 00:10:45.238 { 00:10:45.238 "dma_device_id": "system", 00:10:45.238 "dma_device_type": 1 00:10:45.238 }, 00:10:45.238 { 00:10:45.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.238 "dma_device_type": 2 00:10:45.238 } 00:10:45.238 ], 00:10:45.238 "driver_specific": {} 00:10:45.238 } 00:10:45.238 ] 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.238 "name": "Existed_Raid", 00:10:45.238 "uuid": "36a0e183-74e2-4c29-a66c-3c099aa23761", 00:10:45.238 "strip_size_kb": 64, 00:10:45.238 "state": "configuring", 00:10:45.238 "raid_level": "concat", 00:10:45.238 "superblock": true, 00:10:45.238 "num_base_bdevs": 4, 00:10:45.238 "num_base_bdevs_discovered": 3, 00:10:45.238 "num_base_bdevs_operational": 4, 00:10:45.238 "base_bdevs_list": [ 00:10:45.238 { 00:10:45.238 "name": "BaseBdev1", 00:10:45.238 "uuid": "b0df8345-f134-42b2-886f-7b50855308fe", 00:10:45.238 "is_configured": true, 00:10:45.238 "data_offset": 2048, 00:10:45.238 "data_size": 63488 00:10:45.238 }, 00:10:45.238 { 00:10:45.238 "name": "BaseBdev2", 00:10:45.238 
"uuid": "3cca29bf-39e6-4e6e-a257-acd7e949bee4", 00:10:45.238 "is_configured": true, 00:10:45.238 "data_offset": 2048, 00:10:45.238 "data_size": 63488 00:10:45.238 }, 00:10:45.238 { 00:10:45.238 "name": "BaseBdev3", 00:10:45.238 "uuid": "cc2b6099-63c1-4d31-9bd7-a7aef75460a4", 00:10:45.238 "is_configured": true, 00:10:45.238 "data_offset": 2048, 00:10:45.238 "data_size": 63488 00:10:45.238 }, 00:10:45.238 { 00:10:45.238 "name": "BaseBdev4", 00:10:45.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.238 "is_configured": false, 00:10:45.238 "data_offset": 0, 00:10:45.238 "data_size": 0 00:10:45.238 } 00:10:45.238 ] 00:10:45.238 }' 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.238 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.498 [2024-11-18 10:39:11.301336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.498 [2024-11-18 10:39:11.301603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.498 [2024-11-18 10:39:11.301620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:45.498 BaseBdev4 00:10:45.498 [2024-11-18 10:39:11.301914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:45.498 [2024-11-18 10:39:11.302081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:45.498 [2024-11-18 10:39:11.302095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:45.498 [2024-11-18 10:39:11.302282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.498 [ 00:10:45.498 { 00:10:45.498 "name": "BaseBdev4", 00:10:45.498 "aliases": [ 00:10:45.498 "b9547aea-aba9-416f-8dc3-e68468dbe245" 00:10:45.498 ], 00:10:45.498 "product_name": "Malloc disk", 00:10:45.498 "block_size": 512, 00:10:45.498 
"num_blocks": 65536, 00:10:45.498 "uuid": "b9547aea-aba9-416f-8dc3-e68468dbe245", 00:10:45.498 "assigned_rate_limits": { 00:10:45.498 "rw_ios_per_sec": 0, 00:10:45.498 "rw_mbytes_per_sec": 0, 00:10:45.498 "r_mbytes_per_sec": 0, 00:10:45.498 "w_mbytes_per_sec": 0 00:10:45.498 }, 00:10:45.498 "claimed": true, 00:10:45.498 "claim_type": "exclusive_write", 00:10:45.498 "zoned": false, 00:10:45.498 "supported_io_types": { 00:10:45.498 "read": true, 00:10:45.498 "write": true, 00:10:45.498 "unmap": true, 00:10:45.498 "flush": true, 00:10:45.498 "reset": true, 00:10:45.498 "nvme_admin": false, 00:10:45.498 "nvme_io": false, 00:10:45.498 "nvme_io_md": false, 00:10:45.498 "write_zeroes": true, 00:10:45.498 "zcopy": true, 00:10:45.498 "get_zone_info": false, 00:10:45.498 "zone_management": false, 00:10:45.498 "zone_append": false, 00:10:45.498 "compare": false, 00:10:45.498 "compare_and_write": false, 00:10:45.498 "abort": true, 00:10:45.498 "seek_hole": false, 00:10:45.498 "seek_data": false, 00:10:45.498 "copy": true, 00:10:45.498 "nvme_iov_md": false 00:10:45.498 }, 00:10:45.498 "memory_domains": [ 00:10:45.498 { 00:10:45.498 "dma_device_id": "system", 00:10:45.498 "dma_device_type": 1 00:10:45.498 }, 00:10:45.498 { 00:10:45.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.498 "dma_device_type": 2 00:10:45.498 } 00:10:45.498 ], 00:10:45.498 "driver_specific": {} 00:10:45.498 } 00:10:45.498 ] 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.498 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.499 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.758 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.758 "name": "Existed_Raid", 00:10:45.758 "uuid": "36a0e183-74e2-4c29-a66c-3c099aa23761", 00:10:45.758 "strip_size_kb": 64, 00:10:45.758 "state": "online", 00:10:45.758 "raid_level": "concat", 00:10:45.758 "superblock": true, 00:10:45.758 "num_base_bdevs": 4, 
00:10:45.758 "num_base_bdevs_discovered": 4, 00:10:45.758 "num_base_bdevs_operational": 4, 00:10:45.758 "base_bdevs_list": [ 00:10:45.758 { 00:10:45.758 "name": "BaseBdev1", 00:10:45.758 "uuid": "b0df8345-f134-42b2-886f-7b50855308fe", 00:10:45.758 "is_configured": true, 00:10:45.758 "data_offset": 2048, 00:10:45.758 "data_size": 63488 00:10:45.758 }, 00:10:45.758 { 00:10:45.758 "name": "BaseBdev2", 00:10:45.758 "uuid": "3cca29bf-39e6-4e6e-a257-acd7e949bee4", 00:10:45.758 "is_configured": true, 00:10:45.758 "data_offset": 2048, 00:10:45.758 "data_size": 63488 00:10:45.758 }, 00:10:45.758 { 00:10:45.758 "name": "BaseBdev3", 00:10:45.758 "uuid": "cc2b6099-63c1-4d31-9bd7-a7aef75460a4", 00:10:45.758 "is_configured": true, 00:10:45.758 "data_offset": 2048, 00:10:45.758 "data_size": 63488 00:10:45.758 }, 00:10:45.758 { 00:10:45.758 "name": "BaseBdev4", 00:10:45.758 "uuid": "b9547aea-aba9-416f-8dc3-e68468dbe245", 00:10:45.758 "is_configured": true, 00:10:45.758 "data_offset": 2048, 00:10:45.758 "data_size": 63488 00:10:45.758 } 00:10:45.758 ] 00:10:45.758 }' 00:10:45.758 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.758 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.018 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.019 
10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.019 [2024-11-18 10:39:11.784870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.019 "name": "Existed_Raid", 00:10:46.019 "aliases": [ 00:10:46.019 "36a0e183-74e2-4c29-a66c-3c099aa23761" 00:10:46.019 ], 00:10:46.019 "product_name": "Raid Volume", 00:10:46.019 "block_size": 512, 00:10:46.019 "num_blocks": 253952, 00:10:46.019 "uuid": "36a0e183-74e2-4c29-a66c-3c099aa23761", 00:10:46.019 "assigned_rate_limits": { 00:10:46.019 "rw_ios_per_sec": 0, 00:10:46.019 "rw_mbytes_per_sec": 0, 00:10:46.019 "r_mbytes_per_sec": 0, 00:10:46.019 "w_mbytes_per_sec": 0 00:10:46.019 }, 00:10:46.019 "claimed": false, 00:10:46.019 "zoned": false, 00:10:46.019 "supported_io_types": { 00:10:46.019 "read": true, 00:10:46.019 "write": true, 00:10:46.019 "unmap": true, 00:10:46.019 "flush": true, 00:10:46.019 "reset": true, 00:10:46.019 "nvme_admin": false, 00:10:46.019 "nvme_io": false, 00:10:46.019 "nvme_io_md": false, 00:10:46.019 "write_zeroes": true, 00:10:46.019 "zcopy": false, 00:10:46.019 "get_zone_info": false, 00:10:46.019 "zone_management": false, 00:10:46.019 "zone_append": false, 00:10:46.019 "compare": false, 00:10:46.019 "compare_and_write": false, 00:10:46.019 "abort": false, 00:10:46.019 "seek_hole": false, 00:10:46.019 "seek_data": false, 00:10:46.019 "copy": false, 00:10:46.019 
"nvme_iov_md": false 00:10:46.019 }, 00:10:46.019 "memory_domains": [ 00:10:46.019 { 00:10:46.019 "dma_device_id": "system", 00:10:46.019 "dma_device_type": 1 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.019 "dma_device_type": 2 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "dma_device_id": "system", 00:10:46.019 "dma_device_type": 1 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.019 "dma_device_type": 2 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "dma_device_id": "system", 00:10:46.019 "dma_device_type": 1 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.019 "dma_device_type": 2 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "dma_device_id": "system", 00:10:46.019 "dma_device_type": 1 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.019 "dma_device_type": 2 00:10:46.019 } 00:10:46.019 ], 00:10:46.019 "driver_specific": { 00:10:46.019 "raid": { 00:10:46.019 "uuid": "36a0e183-74e2-4c29-a66c-3c099aa23761", 00:10:46.019 "strip_size_kb": 64, 00:10:46.019 "state": "online", 00:10:46.019 "raid_level": "concat", 00:10:46.019 "superblock": true, 00:10:46.019 "num_base_bdevs": 4, 00:10:46.019 "num_base_bdevs_discovered": 4, 00:10:46.019 "num_base_bdevs_operational": 4, 00:10:46.019 "base_bdevs_list": [ 00:10:46.019 { 00:10:46.019 "name": "BaseBdev1", 00:10:46.019 "uuid": "b0df8345-f134-42b2-886f-7b50855308fe", 00:10:46.019 "is_configured": true, 00:10:46.019 "data_offset": 2048, 00:10:46.019 "data_size": 63488 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "name": "BaseBdev2", 00:10:46.019 "uuid": "3cca29bf-39e6-4e6e-a257-acd7e949bee4", 00:10:46.019 "is_configured": true, 00:10:46.019 "data_offset": 2048, 00:10:46.019 "data_size": 63488 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "name": "BaseBdev3", 00:10:46.019 "uuid": "cc2b6099-63c1-4d31-9bd7-a7aef75460a4", 00:10:46.019 "is_configured": true, 
00:10:46.019 "data_offset": 2048, 00:10:46.019 "data_size": 63488 00:10:46.019 }, 00:10:46.019 { 00:10:46.019 "name": "BaseBdev4", 00:10:46.019 "uuid": "b9547aea-aba9-416f-8dc3-e68468dbe245", 00:10:46.019 "is_configured": true, 00:10:46.019 "data_offset": 2048, 00:10:46.019 "data_size": 63488 00:10:46.019 } 00:10:46.019 ] 00:10:46.019 } 00:10:46.019 } 00:10:46.019 }' 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:46.019 BaseBdev2 00:10:46.019 BaseBdev3 00:10:46.019 BaseBdev4' 00:10:46.019 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.279 10:39:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.279 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.279 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.279 [2024-11-18 10:39:12.119993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.280 [2024-11-18 10:39:12.120024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.280 [2024-11-18 10:39:12.120076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.540 "name": "Existed_Raid", 00:10:46.540 "uuid": "36a0e183-74e2-4c29-a66c-3c099aa23761", 00:10:46.540 "strip_size_kb": 64, 00:10:46.540 "state": "offline", 00:10:46.540 "raid_level": "concat", 00:10:46.540 "superblock": true, 00:10:46.540 "num_base_bdevs": 4, 00:10:46.540 "num_base_bdevs_discovered": 3, 00:10:46.540 "num_base_bdevs_operational": 3, 00:10:46.540 "base_bdevs_list": [ 00:10:46.540 { 00:10:46.540 "name": null, 00:10:46.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.540 "is_configured": false, 00:10:46.540 "data_offset": 0, 00:10:46.540 "data_size": 63488 00:10:46.540 }, 00:10:46.540 { 00:10:46.540 "name": "BaseBdev2", 00:10:46.540 "uuid": "3cca29bf-39e6-4e6e-a257-acd7e949bee4", 00:10:46.540 "is_configured": true, 00:10:46.540 "data_offset": 2048, 00:10:46.540 "data_size": 63488 00:10:46.540 }, 00:10:46.540 { 00:10:46.540 "name": "BaseBdev3", 00:10:46.540 "uuid": "cc2b6099-63c1-4d31-9bd7-a7aef75460a4", 00:10:46.540 "is_configured": true, 00:10:46.540 "data_offset": 2048, 00:10:46.540 "data_size": 63488 00:10:46.540 }, 00:10:46.540 { 00:10:46.540 "name": "BaseBdev4", 00:10:46.540 "uuid": "b9547aea-aba9-416f-8dc3-e68468dbe245", 00:10:46.540 "is_configured": true, 00:10:46.540 "data_offset": 2048, 00:10:46.540 "data_size": 63488 00:10:46.540 } 00:10:46.540 ] 00:10:46.540 }' 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.540 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.799 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.799 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.799 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.799 
10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.799 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.799 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.799 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.060 [2024-11-18 10:39:12.687794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.060 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.060 [2024-11-18 10:39:12.847297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:47.320 10:39:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.320 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.321 [2024-11-18 10:39:13.002897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:47.321 [2024-11-18 10:39:13.003033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.321 BaseBdev2 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.321 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.582 [ 00:10:47.582 { 00:10:47.582 "name": "BaseBdev2", 00:10:47.582 "aliases": [ 00:10:47.582 
"e80ae3db-bd1b-4641-9898-5f66d64cf96b" 00:10:47.582 ], 00:10:47.582 "product_name": "Malloc disk", 00:10:47.582 "block_size": 512, 00:10:47.582 "num_blocks": 65536, 00:10:47.582 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:47.582 "assigned_rate_limits": { 00:10:47.582 "rw_ios_per_sec": 0, 00:10:47.582 "rw_mbytes_per_sec": 0, 00:10:47.582 "r_mbytes_per_sec": 0, 00:10:47.582 "w_mbytes_per_sec": 0 00:10:47.582 }, 00:10:47.582 "claimed": false, 00:10:47.582 "zoned": false, 00:10:47.582 "supported_io_types": { 00:10:47.582 "read": true, 00:10:47.582 "write": true, 00:10:47.582 "unmap": true, 00:10:47.582 "flush": true, 00:10:47.582 "reset": true, 00:10:47.582 "nvme_admin": false, 00:10:47.582 "nvme_io": false, 00:10:47.582 "nvme_io_md": false, 00:10:47.582 "write_zeroes": true, 00:10:47.582 "zcopy": true, 00:10:47.582 "get_zone_info": false, 00:10:47.582 "zone_management": false, 00:10:47.582 "zone_append": false, 00:10:47.582 "compare": false, 00:10:47.582 "compare_and_write": false, 00:10:47.582 "abort": true, 00:10:47.582 "seek_hole": false, 00:10:47.582 "seek_data": false, 00:10:47.582 "copy": true, 00:10:47.582 "nvme_iov_md": false 00:10:47.582 }, 00:10:47.582 "memory_domains": [ 00:10:47.582 { 00:10:47.582 "dma_device_id": "system", 00:10:47.582 "dma_device_type": 1 00:10:47.582 }, 00:10:47.582 { 00:10:47.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.582 "dma_device_type": 2 00:10:47.582 } 00:10:47.582 ], 00:10:47.582 "driver_specific": {} 00:10:47.582 } 00:10:47.582 ] 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.582 10:39:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.582 BaseBdev3 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.582 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.582 [ 00:10:47.582 { 
00:10:47.582 "name": "BaseBdev3", 00:10:47.582 "aliases": [ 00:10:47.582 "b64708ea-acea-483d-9422-5c8143fa3678" 00:10:47.582 ], 00:10:47.582 "product_name": "Malloc disk", 00:10:47.582 "block_size": 512, 00:10:47.582 "num_blocks": 65536, 00:10:47.582 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:47.582 "assigned_rate_limits": { 00:10:47.582 "rw_ios_per_sec": 0, 00:10:47.582 "rw_mbytes_per_sec": 0, 00:10:47.582 "r_mbytes_per_sec": 0, 00:10:47.582 "w_mbytes_per_sec": 0 00:10:47.582 }, 00:10:47.582 "claimed": false, 00:10:47.582 "zoned": false, 00:10:47.582 "supported_io_types": { 00:10:47.582 "read": true, 00:10:47.582 "write": true, 00:10:47.582 "unmap": true, 00:10:47.582 "flush": true, 00:10:47.582 "reset": true, 00:10:47.582 "nvme_admin": false, 00:10:47.582 "nvme_io": false, 00:10:47.582 "nvme_io_md": false, 00:10:47.582 "write_zeroes": true, 00:10:47.582 "zcopy": true, 00:10:47.582 "get_zone_info": false, 00:10:47.582 "zone_management": false, 00:10:47.582 "zone_append": false, 00:10:47.582 "compare": false, 00:10:47.582 "compare_and_write": false, 00:10:47.582 "abort": true, 00:10:47.582 "seek_hole": false, 00:10:47.582 "seek_data": false, 00:10:47.583 "copy": true, 00:10:47.583 "nvme_iov_md": false 00:10:47.583 }, 00:10:47.583 "memory_domains": [ 00:10:47.583 { 00:10:47.583 "dma_device_id": "system", 00:10:47.583 "dma_device_type": 1 00:10:47.583 }, 00:10:47.583 { 00:10:47.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.583 "dma_device_type": 2 00:10:47.583 } 00:10:47.583 ], 00:10:47.583 "driver_specific": {} 00:10:47.583 } 00:10:47.583 ] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 BaseBdev4 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:47.583 [ 00:10:47.583 { 00:10:47.583 "name": "BaseBdev4", 00:10:47.583 "aliases": [ 00:10:47.583 "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1" 00:10:47.583 ], 00:10:47.583 "product_name": "Malloc disk", 00:10:47.583 "block_size": 512, 00:10:47.583 "num_blocks": 65536, 00:10:47.583 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:47.583 "assigned_rate_limits": { 00:10:47.583 "rw_ios_per_sec": 0, 00:10:47.583 "rw_mbytes_per_sec": 0, 00:10:47.583 "r_mbytes_per_sec": 0, 00:10:47.583 "w_mbytes_per_sec": 0 00:10:47.583 }, 00:10:47.583 "claimed": false, 00:10:47.583 "zoned": false, 00:10:47.583 "supported_io_types": { 00:10:47.583 "read": true, 00:10:47.583 "write": true, 00:10:47.583 "unmap": true, 00:10:47.583 "flush": true, 00:10:47.583 "reset": true, 00:10:47.583 "nvme_admin": false, 00:10:47.583 "nvme_io": false, 00:10:47.583 "nvme_io_md": false, 00:10:47.583 "write_zeroes": true, 00:10:47.583 "zcopy": true, 00:10:47.583 "get_zone_info": false, 00:10:47.583 "zone_management": false, 00:10:47.583 "zone_append": false, 00:10:47.583 "compare": false, 00:10:47.583 "compare_and_write": false, 00:10:47.583 "abort": true, 00:10:47.583 "seek_hole": false, 00:10:47.583 "seek_data": false, 00:10:47.583 "copy": true, 00:10:47.583 "nvme_iov_md": false 00:10:47.583 }, 00:10:47.583 "memory_domains": [ 00:10:47.583 { 00:10:47.583 "dma_device_id": "system", 00:10:47.583 "dma_device_type": 1 00:10:47.583 }, 00:10:47.583 { 00:10:47.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.583 "dma_device_type": 2 00:10:47.583 } 00:10:47.583 ], 00:10:47.583 "driver_specific": {} 00:10:47.583 } 00:10:47.583 ] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.583 10:39:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 [2024-11-18 10:39:13.407066] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.583 [2024-11-18 10:39:13.407216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.583 [2024-11-18 10:39:13.407264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.583 [2024-11-18 10:39:13.409270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.583 [2024-11-18 10:39:13.409361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.583 "name": "Existed_Raid", 00:10:47.583 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:47.583 "strip_size_kb": 64, 00:10:47.583 "state": "configuring", 00:10:47.583 "raid_level": "concat", 00:10:47.583 "superblock": true, 00:10:47.583 "num_base_bdevs": 4, 00:10:47.583 "num_base_bdevs_discovered": 3, 00:10:47.583 "num_base_bdevs_operational": 4, 00:10:47.583 "base_bdevs_list": [ 00:10:47.583 { 00:10:47.583 "name": "BaseBdev1", 00:10:47.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.583 "is_configured": false, 00:10:47.583 "data_offset": 0, 00:10:47.583 "data_size": 0 00:10:47.583 }, 00:10:47.583 { 00:10:47.583 "name": "BaseBdev2", 00:10:47.583 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:47.583 "is_configured": true, 00:10:47.583 "data_offset": 2048, 00:10:47.583 "data_size": 63488 
00:10:47.583 }, 00:10:47.583 { 00:10:47.583 "name": "BaseBdev3", 00:10:47.583 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:47.583 "is_configured": true, 00:10:47.583 "data_offset": 2048, 00:10:47.583 "data_size": 63488 00:10:47.583 }, 00:10:47.583 { 00:10:47.583 "name": "BaseBdev4", 00:10:47.583 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:47.583 "is_configured": true, 00:10:47.583 "data_offset": 2048, 00:10:47.583 "data_size": 63488 00:10:47.583 } 00:10:47.583 ] 00:10:47.583 }' 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.583 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.229 [2024-11-18 10:39:13.858270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.229 "name": "Existed_Raid", 00:10:48.229 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:48.229 "strip_size_kb": 64, 00:10:48.229 "state": "configuring", 00:10:48.229 "raid_level": "concat", 00:10:48.229 "superblock": true, 00:10:48.229 "num_base_bdevs": 4, 00:10:48.229 "num_base_bdevs_discovered": 2, 00:10:48.229 "num_base_bdevs_operational": 4, 00:10:48.229 "base_bdevs_list": [ 00:10:48.229 { 00:10:48.229 "name": "BaseBdev1", 00:10:48.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.229 "is_configured": false, 00:10:48.229 "data_offset": 0, 00:10:48.229 "data_size": 0 00:10:48.229 }, 00:10:48.229 { 00:10:48.229 "name": null, 00:10:48.229 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:48.229 "is_configured": false, 00:10:48.229 "data_offset": 0, 00:10:48.229 "data_size": 63488 
00:10:48.229 }, 00:10:48.229 { 00:10:48.229 "name": "BaseBdev3", 00:10:48.229 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:48.229 "is_configured": true, 00:10:48.229 "data_offset": 2048, 00:10:48.229 "data_size": 63488 00:10:48.229 }, 00:10:48.229 { 00:10:48.229 "name": "BaseBdev4", 00:10:48.229 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:48.229 "is_configured": true, 00:10:48.229 "data_offset": 2048, 00:10:48.229 "data_size": 63488 00:10:48.229 } 00:10:48.229 ] 00:10:48.229 }' 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.229 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.489 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.750 [2024-11-18 10:39:14.387866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.750 BaseBdev1 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.750 [ 00:10:48.750 { 00:10:48.750 "name": "BaseBdev1", 00:10:48.750 "aliases": [ 00:10:48.750 "9516fd15-b72c-4d2a-8913-f869111f0a2f" 00:10:48.750 ], 00:10:48.750 "product_name": "Malloc disk", 00:10:48.750 "block_size": 512, 00:10:48.750 "num_blocks": 65536, 00:10:48.750 "uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:48.750 "assigned_rate_limits": { 00:10:48.750 "rw_ios_per_sec": 0, 00:10:48.750 "rw_mbytes_per_sec": 0, 
00:10:48.750 "r_mbytes_per_sec": 0, 00:10:48.750 "w_mbytes_per_sec": 0 00:10:48.750 }, 00:10:48.750 "claimed": true, 00:10:48.750 "claim_type": "exclusive_write", 00:10:48.750 "zoned": false, 00:10:48.750 "supported_io_types": { 00:10:48.750 "read": true, 00:10:48.750 "write": true, 00:10:48.750 "unmap": true, 00:10:48.750 "flush": true, 00:10:48.750 "reset": true, 00:10:48.750 "nvme_admin": false, 00:10:48.750 "nvme_io": false, 00:10:48.750 "nvme_io_md": false, 00:10:48.750 "write_zeroes": true, 00:10:48.750 "zcopy": true, 00:10:48.750 "get_zone_info": false, 00:10:48.750 "zone_management": false, 00:10:48.750 "zone_append": false, 00:10:48.750 "compare": false, 00:10:48.750 "compare_and_write": false, 00:10:48.750 "abort": true, 00:10:48.750 "seek_hole": false, 00:10:48.750 "seek_data": false, 00:10:48.750 "copy": true, 00:10:48.750 "nvme_iov_md": false 00:10:48.750 }, 00:10:48.750 "memory_domains": [ 00:10:48.750 { 00:10:48.750 "dma_device_id": "system", 00:10:48.750 "dma_device_type": 1 00:10:48.750 }, 00:10:48.750 { 00:10:48.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.750 "dma_device_type": 2 00:10:48.750 } 00:10:48.750 ], 00:10:48.750 "driver_specific": {} 00:10:48.750 } 00:10:48.750 ] 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.750 10:39:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.750 "name": "Existed_Raid", 00:10:48.750 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:48.750 "strip_size_kb": 64, 00:10:48.750 "state": "configuring", 00:10:48.750 "raid_level": "concat", 00:10:48.750 "superblock": true, 00:10:48.750 "num_base_bdevs": 4, 00:10:48.750 "num_base_bdevs_discovered": 3, 00:10:48.750 "num_base_bdevs_operational": 4, 00:10:48.750 "base_bdevs_list": [ 00:10:48.750 { 00:10:48.750 "name": "BaseBdev1", 00:10:48.750 "uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:48.750 "is_configured": true, 00:10:48.750 "data_offset": 2048, 00:10:48.750 "data_size": 63488 00:10:48.750 }, 00:10:48.750 { 
00:10:48.750 "name": null, 00:10:48.750 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:48.750 "is_configured": false, 00:10:48.750 "data_offset": 0, 00:10:48.750 "data_size": 63488 00:10:48.750 }, 00:10:48.750 { 00:10:48.750 "name": "BaseBdev3", 00:10:48.750 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:48.750 "is_configured": true, 00:10:48.750 "data_offset": 2048, 00:10:48.750 "data_size": 63488 00:10:48.750 }, 00:10:48.750 { 00:10:48.750 "name": "BaseBdev4", 00:10:48.750 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:48.750 "is_configured": true, 00:10:48.750 "data_offset": 2048, 00:10:48.750 "data_size": 63488 00:10:48.750 } 00:10:48.750 ] 00:10:48.750 }' 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.750 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.011 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.271 [2024-11-18 10:39:14.895032] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.271 10:39:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.271 "name": "Existed_Raid", 00:10:49.271 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:49.271 "strip_size_kb": 64, 00:10:49.271 "state": "configuring", 00:10:49.271 "raid_level": "concat", 00:10:49.271 "superblock": true, 00:10:49.271 "num_base_bdevs": 4, 00:10:49.271 "num_base_bdevs_discovered": 2, 00:10:49.271 "num_base_bdevs_operational": 4, 00:10:49.271 "base_bdevs_list": [ 00:10:49.271 { 00:10:49.271 "name": "BaseBdev1", 00:10:49.271 "uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:49.271 "is_configured": true, 00:10:49.271 "data_offset": 2048, 00:10:49.271 "data_size": 63488 00:10:49.271 }, 00:10:49.271 { 00:10:49.271 "name": null, 00:10:49.271 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:49.271 "is_configured": false, 00:10:49.271 "data_offset": 0, 00:10:49.271 "data_size": 63488 00:10:49.271 }, 00:10:49.271 { 00:10:49.271 "name": null, 00:10:49.271 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:49.271 "is_configured": false, 00:10:49.271 "data_offset": 0, 00:10:49.271 "data_size": 63488 00:10:49.271 }, 00:10:49.271 { 00:10:49.271 "name": "BaseBdev4", 00:10:49.271 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:49.271 "is_configured": true, 00:10:49.271 "data_offset": 2048, 00:10:49.271 "data_size": 63488 00:10:49.271 } 00:10:49.271 ] 00:10:49.271 }' 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.271 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.531 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.531 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.531 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.531 10:39:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.531 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.531 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:49.531 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:49.531 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.531 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.791 [2024-11-18 10:39:15.418142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.791 "name": "Existed_Raid", 00:10:49.791 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:49.791 "strip_size_kb": 64, 00:10:49.791 "state": "configuring", 00:10:49.791 "raid_level": "concat", 00:10:49.791 "superblock": true, 00:10:49.791 "num_base_bdevs": 4, 00:10:49.791 "num_base_bdevs_discovered": 3, 00:10:49.791 "num_base_bdevs_operational": 4, 00:10:49.791 "base_bdevs_list": [ 00:10:49.791 { 00:10:49.791 "name": "BaseBdev1", 00:10:49.791 "uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:49.791 "is_configured": true, 00:10:49.791 "data_offset": 2048, 00:10:49.791 "data_size": 63488 00:10:49.791 }, 00:10:49.791 { 00:10:49.791 "name": null, 00:10:49.791 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:49.791 "is_configured": false, 00:10:49.791 "data_offset": 0, 00:10:49.791 "data_size": 63488 00:10:49.791 }, 00:10:49.791 { 00:10:49.791 "name": "BaseBdev3", 00:10:49.791 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:49.791 "is_configured": true, 00:10:49.791 "data_offset": 2048, 00:10:49.791 "data_size": 63488 00:10:49.791 }, 00:10:49.791 { 00:10:49.791 "name": "BaseBdev4", 00:10:49.791 "uuid": 
"0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:49.791 "is_configured": true, 00:10:49.791 "data_offset": 2048, 00:10:49.791 "data_size": 63488 00:10:49.791 } 00:10:49.791 ] 00:10:49.791 }' 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.791 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.050 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.050 [2024-11-18 10:39:15.909326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.310 "name": "Existed_Raid", 00:10:50.310 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:50.310 "strip_size_kb": 64, 00:10:50.310 "state": "configuring", 00:10:50.310 "raid_level": "concat", 00:10:50.310 "superblock": true, 00:10:50.310 "num_base_bdevs": 4, 00:10:50.310 "num_base_bdevs_discovered": 2, 00:10:50.310 "num_base_bdevs_operational": 4, 00:10:50.310 "base_bdevs_list": [ 00:10:50.310 { 00:10:50.310 "name": null, 00:10:50.310 
"uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:50.310 "is_configured": false, 00:10:50.310 "data_offset": 0, 00:10:50.310 "data_size": 63488 00:10:50.310 }, 00:10:50.310 { 00:10:50.310 "name": null, 00:10:50.310 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:50.310 "is_configured": false, 00:10:50.310 "data_offset": 0, 00:10:50.310 "data_size": 63488 00:10:50.310 }, 00:10:50.310 { 00:10:50.310 "name": "BaseBdev3", 00:10:50.310 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:50.310 "is_configured": true, 00:10:50.310 "data_offset": 2048, 00:10:50.310 "data_size": 63488 00:10:50.310 }, 00:10:50.310 { 00:10:50.310 "name": "BaseBdev4", 00:10:50.310 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:50.310 "is_configured": true, 00:10:50.310 "data_offset": 2048, 00:10:50.310 "data_size": 63488 00:10:50.310 } 00:10:50.310 ] 00:10:50.310 }' 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.310 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.570 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.570 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.570 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.570 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.830 [2024-11-18 10:39:16.499842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.830 10:39:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.830 "name": "Existed_Raid", 00:10:50.830 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:50.830 "strip_size_kb": 64, 00:10:50.830 "state": "configuring", 00:10:50.830 "raid_level": "concat", 00:10:50.830 "superblock": true, 00:10:50.830 "num_base_bdevs": 4, 00:10:50.830 "num_base_bdevs_discovered": 3, 00:10:50.830 "num_base_bdevs_operational": 4, 00:10:50.830 "base_bdevs_list": [ 00:10:50.830 { 00:10:50.830 "name": null, 00:10:50.830 "uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:50.830 "is_configured": false, 00:10:50.830 "data_offset": 0, 00:10:50.830 "data_size": 63488 00:10:50.830 }, 00:10:50.830 { 00:10:50.830 "name": "BaseBdev2", 00:10:50.830 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:50.830 "is_configured": true, 00:10:50.830 "data_offset": 2048, 00:10:50.830 "data_size": 63488 00:10:50.830 }, 00:10:50.830 { 00:10:50.830 "name": "BaseBdev3", 00:10:50.830 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:50.830 "is_configured": true, 00:10:50.830 "data_offset": 2048, 00:10:50.830 "data_size": 63488 00:10:50.830 }, 00:10:50.830 { 00:10:50.830 "name": "BaseBdev4", 00:10:50.830 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:50.830 "is_configured": true, 00:10:50.830 "data_offset": 2048, 00:10:50.830 "data_size": 63488 00:10:50.830 } 00:10:50.830 ] 00:10:50.830 }' 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.830 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.090 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.090 10:39:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.090 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.090 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.090 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.350 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:51.350 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.350 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:51.350 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.350 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9516fd15-b72c-4d2a-8913-f869111f0a2f 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.350 [2024-11-18 10:39:17.061088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:51.350 [2024-11-18 10:39:17.061379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:51.350 [2024-11-18 10:39:17.061394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:51.350 NewBaseBdev 00:10:51.350 [2024-11-18 10:39:17.061679] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:51.350 [2024-11-18 10:39:17.061834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:51.350 [2024-11-18 10:39:17.061847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:51.350 [2024-11-18 10:39:17.061985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:51.350 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.350 
10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.350 [ 00:10:51.350 { 00:10:51.350 "name": "NewBaseBdev", 00:10:51.350 "aliases": [ 00:10:51.350 "9516fd15-b72c-4d2a-8913-f869111f0a2f" 00:10:51.350 ], 00:10:51.350 "product_name": "Malloc disk", 00:10:51.350 "block_size": 512, 00:10:51.350 "num_blocks": 65536, 00:10:51.350 "uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:51.350 "assigned_rate_limits": { 00:10:51.350 "rw_ios_per_sec": 0, 00:10:51.350 "rw_mbytes_per_sec": 0, 00:10:51.350 "r_mbytes_per_sec": 0, 00:10:51.350 "w_mbytes_per_sec": 0 00:10:51.350 }, 00:10:51.350 "claimed": true, 00:10:51.350 "claim_type": "exclusive_write", 00:10:51.350 "zoned": false, 00:10:51.350 "supported_io_types": { 00:10:51.350 "read": true, 00:10:51.350 "write": true, 00:10:51.350 "unmap": true, 00:10:51.350 "flush": true, 00:10:51.350 "reset": true, 00:10:51.350 "nvme_admin": false, 00:10:51.350 "nvme_io": false, 00:10:51.350 "nvme_io_md": false, 00:10:51.350 "write_zeroes": true, 00:10:51.350 "zcopy": true, 00:10:51.350 "get_zone_info": false, 00:10:51.350 "zone_management": false, 00:10:51.350 "zone_append": false, 00:10:51.350 "compare": false, 00:10:51.350 "compare_and_write": false, 00:10:51.350 "abort": true, 00:10:51.350 "seek_hole": false, 00:10:51.350 "seek_data": false, 00:10:51.350 "copy": true, 00:10:51.350 "nvme_iov_md": false 00:10:51.350 }, 00:10:51.351 "memory_domains": [ 00:10:51.351 { 00:10:51.351 "dma_device_id": "system", 00:10:51.351 "dma_device_type": 1 00:10:51.351 }, 00:10:51.351 { 00:10:51.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.351 "dma_device_type": 2 00:10:51.351 } 00:10:51.351 ], 00:10:51.351 "driver_specific": {} 00:10:51.351 } 00:10:51.351 ] 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.351 10:39:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.351 "name": "Existed_Raid", 00:10:51.351 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:51.351 "strip_size_kb": 64, 00:10:51.351 
"state": "online", 00:10:51.351 "raid_level": "concat", 00:10:51.351 "superblock": true, 00:10:51.351 "num_base_bdevs": 4, 00:10:51.351 "num_base_bdevs_discovered": 4, 00:10:51.351 "num_base_bdevs_operational": 4, 00:10:51.351 "base_bdevs_list": [ 00:10:51.351 { 00:10:51.351 "name": "NewBaseBdev", 00:10:51.351 "uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:51.351 "is_configured": true, 00:10:51.351 "data_offset": 2048, 00:10:51.351 "data_size": 63488 00:10:51.351 }, 00:10:51.351 { 00:10:51.351 "name": "BaseBdev2", 00:10:51.351 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:51.351 "is_configured": true, 00:10:51.351 "data_offset": 2048, 00:10:51.351 "data_size": 63488 00:10:51.351 }, 00:10:51.351 { 00:10:51.351 "name": "BaseBdev3", 00:10:51.351 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:51.351 "is_configured": true, 00:10:51.351 "data_offset": 2048, 00:10:51.351 "data_size": 63488 00:10:51.351 }, 00:10:51.351 { 00:10:51.351 "name": "BaseBdev4", 00:10:51.351 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:51.351 "is_configured": true, 00:10:51.351 "data_offset": 2048, 00:10:51.351 "data_size": 63488 00:10:51.351 } 00:10:51.351 ] 00:10:51.351 }' 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.351 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.921 
10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.921 [2024-11-18 10:39:17.584548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.921 "name": "Existed_Raid", 00:10:51.921 "aliases": [ 00:10:51.921 "afc93b65-308d-4b1e-a5b6-06e11535b351" 00:10:51.921 ], 00:10:51.921 "product_name": "Raid Volume", 00:10:51.921 "block_size": 512, 00:10:51.921 "num_blocks": 253952, 00:10:51.921 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:51.921 "assigned_rate_limits": { 00:10:51.921 "rw_ios_per_sec": 0, 00:10:51.921 "rw_mbytes_per_sec": 0, 00:10:51.921 "r_mbytes_per_sec": 0, 00:10:51.921 "w_mbytes_per_sec": 0 00:10:51.921 }, 00:10:51.921 "claimed": false, 00:10:51.921 "zoned": false, 00:10:51.921 "supported_io_types": { 00:10:51.921 "read": true, 00:10:51.921 "write": true, 00:10:51.921 "unmap": true, 00:10:51.921 "flush": true, 00:10:51.921 "reset": true, 00:10:51.921 "nvme_admin": false, 00:10:51.921 "nvme_io": false, 00:10:51.921 "nvme_io_md": false, 00:10:51.921 "write_zeroes": true, 00:10:51.921 "zcopy": false, 00:10:51.921 "get_zone_info": false, 00:10:51.921 "zone_management": false, 00:10:51.921 "zone_append": false, 00:10:51.921 "compare": false, 00:10:51.921 "compare_and_write": false, 00:10:51.921 "abort": 
false, 00:10:51.921 "seek_hole": false, 00:10:51.921 "seek_data": false, 00:10:51.921 "copy": false, 00:10:51.921 "nvme_iov_md": false 00:10:51.921 }, 00:10:51.921 "memory_domains": [ 00:10:51.921 { 00:10:51.921 "dma_device_id": "system", 00:10:51.921 "dma_device_type": 1 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.921 "dma_device_type": 2 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "dma_device_id": "system", 00:10:51.921 "dma_device_type": 1 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.921 "dma_device_type": 2 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "dma_device_id": "system", 00:10:51.921 "dma_device_type": 1 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.921 "dma_device_type": 2 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "dma_device_id": "system", 00:10:51.921 "dma_device_type": 1 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.921 "dma_device_type": 2 00:10:51.921 } 00:10:51.921 ], 00:10:51.921 "driver_specific": { 00:10:51.921 "raid": { 00:10:51.921 "uuid": "afc93b65-308d-4b1e-a5b6-06e11535b351", 00:10:51.921 "strip_size_kb": 64, 00:10:51.921 "state": "online", 00:10:51.921 "raid_level": "concat", 00:10:51.921 "superblock": true, 00:10:51.921 "num_base_bdevs": 4, 00:10:51.921 "num_base_bdevs_discovered": 4, 00:10:51.921 "num_base_bdevs_operational": 4, 00:10:51.921 "base_bdevs_list": [ 00:10:51.921 { 00:10:51.921 "name": "NewBaseBdev", 00:10:51.921 "uuid": "9516fd15-b72c-4d2a-8913-f869111f0a2f", 00:10:51.921 "is_configured": true, 00:10:51.921 "data_offset": 2048, 00:10:51.921 "data_size": 63488 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "name": "BaseBdev2", 00:10:51.921 "uuid": "e80ae3db-bd1b-4641-9898-5f66d64cf96b", 00:10:51.921 "is_configured": true, 00:10:51.921 "data_offset": 2048, 00:10:51.921 "data_size": 63488 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 
"name": "BaseBdev3", 00:10:51.921 "uuid": "b64708ea-acea-483d-9422-5c8143fa3678", 00:10:51.921 "is_configured": true, 00:10:51.921 "data_offset": 2048, 00:10:51.921 "data_size": 63488 00:10:51.921 }, 00:10:51.921 { 00:10:51.921 "name": "BaseBdev4", 00:10:51.921 "uuid": "0c5bd4f8-c505-4af1-8ecf-b7dbb446d4f1", 00:10:51.921 "is_configured": true, 00:10:51.921 "data_offset": 2048, 00:10:51.921 "data_size": 63488 00:10:51.921 } 00:10:51.921 ] 00:10:51.921 } 00:10:51.921 } 00:10:51.921 }' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:51.921 BaseBdev2 00:10:51.921 BaseBdev3 00:10:51.921 BaseBdev4' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.921 10:39:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.921 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.922 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.922 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.922 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.182 [2024-11-18 10:39:17.871696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.182 [2024-11-18 10:39:17.871722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.182 [2024-11-18 10:39:17.871789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.182 [2024-11-18 10:39:17.871860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.182 [2024-11-18 10:39:17.871869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71808 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71808 ']' 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71808 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71808 00:10:52.182 killing process with pid 71808 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71808' 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71808 00:10:52.182 [2024-11-18 10:39:17.915849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.182 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71808 00:10:52.752 [2024-11-18 10:39:18.333660] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.692 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:53.692 00:10:53.692 real 0m11.536s 00:10:53.693 user 0m18.075s 00:10:53.693 sys 0m2.129s 00:10:53.693 ************************************ 00:10:53.693 END TEST raid_state_function_test_sb 00:10:53.693 
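The `killprocess 71808` sequence above checks `uname`, reads the process's command name with `ps --no-headers -o comm=`, and refuses to proceed if the target is the `sudo` wrapper before signalling. A hedged sketch of that guard (the helper name and structure here are illustrative, not the autotest source):

```shell
# Sketch of the killprocess guard traced above: verify the PID is alive,
# refuse to signal a sudo wrapper (that would orphan the real process),
# then kill and reap it.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid") # command name only, no header
  [ "$process_name" != "sudo" ] || return 1       # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                 # reap; ignore SIGTERM status
}

sleep 60 &
killprocess_sketch "$!"
```

In the real trace the extra `'[' reactor_0 = sudo ']'` comparison serves the same purpose: the SPDK app's comm is `reactor_0`, so the guard passes and the reactor process is killed directly.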
************************************ 00:10:53.693 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.693 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.693 10:39:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:53.693 10:39:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.693 10:39:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.693 10:39:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.693 ************************************ 00:10:53.693 START TEST raid_superblock_test 00:10:53.693 ************************************ 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:53.693 10:39:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:53.693 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72476 00:10:53.953 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:53.953 10:39:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72476 00:10:53.953 10:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72476 ']' 00:10:53.953 10:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.953 10:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.953 10:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.953 10:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.953 10:39:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.953 [2024-11-18 10:39:19.665578] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:53.953 [2024-11-18 10:39:19.665697] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72476 ] 00:10:54.213 [2024-11-18 10:39:19.845760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.213 [2024-11-18 10:39:19.978420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.473 [2024-11-18 10:39:20.213746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.473 [2024-11-18 10:39:20.213808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:54.734 
10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.734 malloc1 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.734 [2024-11-18 10:39:20.554189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:54.734 [2024-11-18 10:39:20.554347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.734 [2024-11-18 10:39:20.554392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:54.734 [2024-11-18 10:39:20.554422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.734 [2024-11-18 10:39:20.556891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.734 [2024-11-18 10:39:20.556971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:54.734 pt1 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.734 malloc2 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.734 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.039 [2024-11-18 10:39:20.617787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:55.039 [2024-11-18 10:39:20.617894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.039 [2024-11-18 10:39:20.617938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:55.039 [2024-11-18 10:39:20.617964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.039 [2024-11-18 10:39:20.620222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.039 [2024-11-18 10:39:20.620290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:55.039 
pt2 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.039 malloc3 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.039 [2024-11-18 10:39:20.690152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:55.039 [2024-11-18 10:39:20.690273] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.039 [2024-11-18 10:39:20.690311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:55.039 [2024-11-18 10:39:20.690340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.039 [2024-11-18 10:39:20.692670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.039 [2024-11-18 10:39:20.692737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:55.039 pt3 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.039 malloc4 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.039 [2024-11-18 10:39:20.755089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:55.039 [2024-11-18 10:39:20.755195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.039 [2024-11-18 10:39:20.755231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:55.039 [2024-11-18 10:39:20.755259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.039 [2024-11-18 10:39:20.757566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.039 [2024-11-18 10:39:20.757633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:55.039 pt4 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.039 [2024-11-18 10:39:20.767111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:55.039 [2024-11-18 
10:39:20.769159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:55.039 [2024-11-18 10:39:20.769236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:55.039 [2024-11-18 10:39:20.769298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:55.039 [2024-11-18 10:39:20.769483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:55.039 [2024-11-18 10:39:20.769499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.039 [2024-11-18 10:39:20.769743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:55.039 [2024-11-18 10:39:20.769918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:55.039 [2024-11-18 10:39:20.769932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:55.039 [2024-11-18 10:39:20.770067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.039 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.040 "name": "raid_bdev1", 00:10:55.040 "uuid": "04311885-3003-4b68-9e80-ec4aa0ad90be", 00:10:55.040 "strip_size_kb": 64, 00:10:55.040 "state": "online", 00:10:55.040 "raid_level": "concat", 00:10:55.040 "superblock": true, 00:10:55.040 "num_base_bdevs": 4, 00:10:55.040 "num_base_bdevs_discovered": 4, 00:10:55.040 "num_base_bdevs_operational": 4, 00:10:55.040 "base_bdevs_list": [ 00:10:55.040 { 00:10:55.040 "name": "pt1", 00:10:55.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.040 "is_configured": true, 00:10:55.040 "data_offset": 2048, 00:10:55.040 "data_size": 63488 00:10:55.040 }, 00:10:55.040 { 00:10:55.040 "name": "pt2", 00:10:55.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.040 "is_configured": true, 00:10:55.040 "data_offset": 2048, 00:10:55.040 "data_size": 63488 00:10:55.040 }, 00:10:55.040 { 00:10:55.040 "name": "pt3", 00:10:55.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.040 "is_configured": true, 00:10:55.040 "data_offset": 2048, 00:10:55.040 
"data_size": 63488 00:10:55.040 }, 00:10:55.040 { 00:10:55.040 "name": "pt4", 00:10:55.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:55.040 "is_configured": true, 00:10:55.040 "data_offset": 2048, 00:10:55.040 "data_size": 63488 00:10:55.040 } 00:10:55.040 ] 00:10:55.040 }' 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.040 10:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.609 [2024-11-18 10:39:21.266571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.609 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.609 "name": "raid_bdev1", 00:10:55.609 "aliases": [ 00:10:55.609 "04311885-3003-4b68-9e80-ec4aa0ad90be" 
00:10:55.609 ], 00:10:55.609 "product_name": "Raid Volume", 00:10:55.609 "block_size": 512, 00:10:55.609 "num_blocks": 253952, 00:10:55.609 "uuid": "04311885-3003-4b68-9e80-ec4aa0ad90be", 00:10:55.609 "assigned_rate_limits": { 00:10:55.609 "rw_ios_per_sec": 0, 00:10:55.609 "rw_mbytes_per_sec": 0, 00:10:55.609 "r_mbytes_per_sec": 0, 00:10:55.609 "w_mbytes_per_sec": 0 00:10:55.609 }, 00:10:55.609 "claimed": false, 00:10:55.609 "zoned": false, 00:10:55.609 "supported_io_types": { 00:10:55.609 "read": true, 00:10:55.609 "write": true, 00:10:55.609 "unmap": true, 00:10:55.609 "flush": true, 00:10:55.609 "reset": true, 00:10:55.609 "nvme_admin": false, 00:10:55.609 "nvme_io": false, 00:10:55.609 "nvme_io_md": false, 00:10:55.609 "write_zeroes": true, 00:10:55.609 "zcopy": false, 00:10:55.609 "get_zone_info": false, 00:10:55.610 "zone_management": false, 00:10:55.610 "zone_append": false, 00:10:55.610 "compare": false, 00:10:55.610 "compare_and_write": false, 00:10:55.610 "abort": false, 00:10:55.610 "seek_hole": false, 00:10:55.610 "seek_data": false, 00:10:55.610 "copy": false, 00:10:55.610 "nvme_iov_md": false 00:10:55.610 }, 00:10:55.610 "memory_domains": [ 00:10:55.610 { 00:10:55.610 "dma_device_id": "system", 00:10:55.610 "dma_device_type": 1 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.610 "dma_device_type": 2 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "dma_device_id": "system", 00:10:55.610 "dma_device_type": 1 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.610 "dma_device_type": 2 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "dma_device_id": "system", 00:10:55.610 "dma_device_type": 1 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.610 "dma_device_type": 2 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "dma_device_id": "system", 00:10:55.610 "dma_device_type": 1 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:55.610 "dma_device_type": 2 00:10:55.610 } 00:10:55.610 ], 00:10:55.610 "driver_specific": { 00:10:55.610 "raid": { 00:10:55.610 "uuid": "04311885-3003-4b68-9e80-ec4aa0ad90be", 00:10:55.610 "strip_size_kb": 64, 00:10:55.610 "state": "online", 00:10:55.610 "raid_level": "concat", 00:10:55.610 "superblock": true, 00:10:55.610 "num_base_bdevs": 4, 00:10:55.610 "num_base_bdevs_discovered": 4, 00:10:55.610 "num_base_bdevs_operational": 4, 00:10:55.610 "base_bdevs_list": [ 00:10:55.610 { 00:10:55.610 "name": "pt1", 00:10:55.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.610 "is_configured": true, 00:10:55.610 "data_offset": 2048, 00:10:55.610 "data_size": 63488 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "name": "pt2", 00:10:55.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.610 "is_configured": true, 00:10:55.610 "data_offset": 2048, 00:10:55.610 "data_size": 63488 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "name": "pt3", 00:10:55.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.610 "is_configured": true, 00:10:55.610 "data_offset": 2048, 00:10:55.610 "data_size": 63488 00:10:55.610 }, 00:10:55.610 { 00:10:55.610 "name": "pt4", 00:10:55.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:55.610 "is_configured": true, 00:10:55.610 "data_offset": 2048, 00:10:55.610 "data_size": 63488 00:10:55.610 } 00:10:55.610 ] 00:10:55.610 } 00:10:55.610 } 00:10:55.610 }' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:55.610 pt2 00:10:55.610 pt3 00:10:55.610 pt4' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.610 10:39:21 
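Each loop iteration above flattens a bdev's `(block_size, md_size, md_interleave, dif_type)` tuple into one string and compares it against the raid bdev's tuple with the `[[ 512 == \5\1\2\ \ \ ]]` pattern. A runnable sketch of that `jq` step, using illustrative sample JSON rather than real RPC output:

```shell
# Sample bdev_get_bdevs output for one passthru bdev (illustrative values).
pt1_info='[{ "name": "pt1", "block_size": 512, "md_size": 0,
             "md_interleave": false, "dif_type": 0 }]'

# join(" ") stringifies numbers and booleans and turns nulls into empty
# strings, which is why the trace's tuples collapse to '512   ' (three
# trailing spaces) when the metadata fields are absent.
cmp_base_bdev=$(jq -r \
  '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' \
  <<< "$pt1_info")

echo "$cmp_base_bdev"
```

Comparing the flattened strings (rather than four fields separately) lets `bdev_raid.sh@193` assert in one `[[ ... ]]` test that every base bdev's geometry matches the raid bdev's.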
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.610 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 [2024-11-18 10:39:21.581959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=04311885-3003-4b68-9e80-ec4aa0ad90be 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 04311885-3003-4b68-9e80-ec4aa0ad90be ']' 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 [2024-11-18 10:39:21.625612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.870 [2024-11-18 10:39:21.625672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.870 [2024-11-18 10:39:21.625769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.870 [2024-11-18 10:39:21.625857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.870 [2024-11-18 10:39:21.625913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:55.870 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.130 10:39:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.130 [2024-11-18 10:39:21.777372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:56.130 [2024-11-18 10:39:21.779459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:56.130 [2024-11-18 10:39:21.779505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:56.130 [2024-11-18 10:39:21.779538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:56.130 [2024-11-18 10:39:21.779589] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:56.130 [2024-11-18 10:39:21.779635] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:56.130 [2024-11-18 10:39:21.779653] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:56.130 [2024-11-18 10:39:21.779670] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:56.130 [2024-11-18 10:39:21.779683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.130 [2024-11-18 10:39:21.779693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:56.130 request: 00:10:56.130 { 00:10:56.130 "name": "raid_bdev1", 00:10:56.130 "raid_level": "concat", 00:10:56.130 "base_bdevs": [ 00:10:56.130 "malloc1", 00:10:56.130 "malloc2", 00:10:56.130 "malloc3", 00:10:56.130 "malloc4" 00:10:56.130 ], 00:10:56.130 "strip_size_kb": 64, 00:10:56.130 "superblock": false, 00:10:56.130 "method": "bdev_raid_create", 00:10:56.130 "req_id": 1 00:10:56.130 } 00:10:56.130 Got JSON-RPC error response 00:10:56.130 response: 00:10:56.130 { 00:10:56.130 "code": -17, 00:10:56.130 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:56.130 } 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.130 [2024-11-18 10:39:21.841245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:56.130 [2024-11-18 10:39:21.841339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.130 [2024-11-18 10:39:21.841371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:56.130 [2024-11-18 10:39:21.841401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.130 [2024-11-18 10:39:21.843766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.130 [2024-11-18 10:39:21.843851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:56.130 [2024-11-18 10:39:21.843939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:56.130 [2024-11-18 10:39:21.844023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:56.130 pt1 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.130 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.130 "name": "raid_bdev1", 00:10:56.130 "uuid": "04311885-3003-4b68-9e80-ec4aa0ad90be", 00:10:56.130 "strip_size_kb": 64, 00:10:56.130 "state": "configuring", 00:10:56.130 "raid_level": "concat", 00:10:56.130 "superblock": true, 00:10:56.130 "num_base_bdevs": 4, 00:10:56.130 "num_base_bdevs_discovered": 1, 00:10:56.130 "num_base_bdevs_operational": 4, 00:10:56.130 "base_bdevs_list": [ 00:10:56.130 { 00:10:56.130 "name": "pt1", 00:10:56.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.130 "is_configured": true, 00:10:56.130 "data_offset": 2048, 00:10:56.130 "data_size": 63488 00:10:56.130 }, 00:10:56.130 { 00:10:56.130 "name": null, 00:10:56.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.130 "is_configured": false, 00:10:56.130 "data_offset": 2048, 00:10:56.130 "data_size": 63488 00:10:56.130 }, 00:10:56.130 { 00:10:56.130 "name": null, 00:10:56.130 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.130 "is_configured": false, 00:10:56.131 "data_offset": 2048, 00:10:56.131 "data_size": 63488 00:10:56.131 }, 00:10:56.131 { 00:10:56.131 "name": null, 00:10:56.131 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:56.131 "is_configured": false, 00:10:56.131 "data_offset": 2048, 00:10:56.131 "data_size": 63488 00:10:56.131 } 00:10:56.131 ] 00:10:56.131 }' 00:10:56.131 10:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.131 10:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.390 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:56.390 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.390 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.390 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.390 [2024-11-18 10:39:22.260514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.390 [2024-11-18 10:39:22.260619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.390 [2024-11-18 10:39:22.260640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:56.390 [2024-11-18 10:39:22.260652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.390 [2024-11-18 10:39:22.261061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.390 [2024-11-18 10:39:22.261080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.390 [2024-11-18 10:39:22.261146] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:56.390 [2024-11-18 10:39:22.261167] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.390 pt2 00:10:56.390 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.390 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:56.390 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.390 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.390 [2024-11-18 10:39:22.268522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.649 10:39:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.649 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.649 "name": "raid_bdev1", 00:10:56.649 "uuid": "04311885-3003-4b68-9e80-ec4aa0ad90be", 00:10:56.649 "strip_size_kb": 64, 00:10:56.649 "state": "configuring", 00:10:56.649 "raid_level": "concat", 00:10:56.649 "superblock": true, 00:10:56.649 "num_base_bdevs": 4, 00:10:56.649 "num_base_bdevs_discovered": 1, 00:10:56.649 "num_base_bdevs_operational": 4, 00:10:56.649 "base_bdevs_list": [ 00:10:56.649 { 00:10:56.649 "name": "pt1", 00:10:56.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.649 "is_configured": true, 00:10:56.649 "data_offset": 2048, 00:10:56.649 "data_size": 63488 00:10:56.649 }, 00:10:56.649 { 00:10:56.649 "name": null, 00:10:56.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.649 "is_configured": false, 00:10:56.649 "data_offset": 0, 00:10:56.649 "data_size": 63488 00:10:56.649 }, 00:10:56.649 { 00:10:56.649 "name": null, 00:10:56.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.649 "is_configured": false, 00:10:56.649 "data_offset": 2048, 00:10:56.649 "data_size": 63488 00:10:56.649 }, 00:10:56.649 { 00:10:56.649 "name": null, 00:10:56.650 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:56.650 "is_configured": false, 00:10:56.650 "data_offset": 2048, 00:10:56.650 "data_size": 63488 00:10:56.650 } 00:10:56.650 ] 00:10:56.650 }' 00:10:56.650 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.650 10:39:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.910 [2024-11-18 10:39:22.759658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.910 [2024-11-18 10:39:22.759710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.910 [2024-11-18 10:39:22.759730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:56.910 [2024-11-18 10:39:22.759740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.910 [2024-11-18 10:39:22.760192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.910 [2024-11-18 10:39:22.760219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.910 [2024-11-18 10:39:22.760297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:56.910 [2024-11-18 10:39:22.760329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.910 pt2 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.910 [2024-11-18 10:39:22.771626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.910 [2024-11-18 10:39:22.771671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.910 [2024-11-18 10:39:22.771695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:56.910 [2024-11-18 10:39:22.771706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.910 [2024-11-18 10:39:22.772066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.910 [2024-11-18 10:39:22.772081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.910 [2024-11-18 10:39:22.772146] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:56.910 [2024-11-18 10:39:22.772161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.910 pt3 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.910 [2024-11-18 10:39:22.783586] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:56.910 [2024-11-18 10:39:22.783630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.910 [2024-11-18 10:39:22.783648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:56.910 [2024-11-18 10:39:22.783656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.910 [2024-11-18 10:39:22.784002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.910 [2024-11-18 10:39:22.784017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:56.910 [2024-11-18 10:39:22.784074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:56.910 [2024-11-18 10:39:22.784089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:56.910 [2024-11-18 10:39:22.784239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:56.910 [2024-11-18 10:39:22.784248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:56.910 [2024-11-18 10:39:22.784505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:56.910 [2024-11-18 10:39:22.784697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:56.910 [2024-11-18 10:39:22.784715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:56.910 [2024-11-18 10:39:22.784835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.910 pt4 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.910 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.170 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.170 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.170 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.170 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.170 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.170 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.170 "name": "raid_bdev1", 00:10:57.170 "uuid": "04311885-3003-4b68-9e80-ec4aa0ad90be", 00:10:57.170 "strip_size_kb": 64, 00:10:57.170 "state": "online", 00:10:57.170 "raid_level": "concat", 00:10:57.170 
"superblock": true, 00:10:57.170 "num_base_bdevs": 4, 00:10:57.171 "num_base_bdevs_discovered": 4, 00:10:57.171 "num_base_bdevs_operational": 4, 00:10:57.171 "base_bdevs_list": [ 00:10:57.171 { 00:10:57.171 "name": "pt1", 00:10:57.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.171 "is_configured": true, 00:10:57.171 "data_offset": 2048, 00:10:57.171 "data_size": 63488 00:10:57.171 }, 00:10:57.171 { 00:10:57.171 "name": "pt2", 00:10:57.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.171 "is_configured": true, 00:10:57.171 "data_offset": 2048, 00:10:57.171 "data_size": 63488 00:10:57.171 }, 00:10:57.171 { 00:10:57.171 "name": "pt3", 00:10:57.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.171 "is_configured": true, 00:10:57.171 "data_offset": 2048, 00:10:57.171 "data_size": 63488 00:10:57.171 }, 00:10:57.171 { 00:10:57.171 "name": "pt4", 00:10:57.171 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.171 "is_configured": true, 00:10:57.171 "data_offset": 2048, 00:10:57.171 "data_size": 63488 00:10:57.171 } 00:10:57.171 ] 00:10:57.171 }' 00:10:57.171 10:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.171 10:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.430 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:57.430 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:57.430 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.430 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.430 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.430 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.430 10:39:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.431 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.431 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.431 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.431 [2024-11-18 10:39:23.279105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.431 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.431 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.431 "name": "raid_bdev1", 00:10:57.431 "aliases": [ 00:10:57.431 "04311885-3003-4b68-9e80-ec4aa0ad90be" 00:10:57.431 ], 00:10:57.431 "product_name": "Raid Volume", 00:10:57.431 "block_size": 512, 00:10:57.431 "num_blocks": 253952, 00:10:57.431 "uuid": "04311885-3003-4b68-9e80-ec4aa0ad90be", 00:10:57.431 "assigned_rate_limits": { 00:10:57.431 "rw_ios_per_sec": 0, 00:10:57.431 "rw_mbytes_per_sec": 0, 00:10:57.431 "r_mbytes_per_sec": 0, 00:10:57.431 "w_mbytes_per_sec": 0 00:10:57.431 }, 00:10:57.431 "claimed": false, 00:10:57.431 "zoned": false, 00:10:57.431 "supported_io_types": { 00:10:57.431 "read": true, 00:10:57.431 "write": true, 00:10:57.431 "unmap": true, 00:10:57.431 "flush": true, 00:10:57.431 "reset": true, 00:10:57.431 "nvme_admin": false, 00:10:57.431 "nvme_io": false, 00:10:57.431 "nvme_io_md": false, 00:10:57.431 "write_zeroes": true, 00:10:57.431 "zcopy": false, 00:10:57.431 "get_zone_info": false, 00:10:57.431 "zone_management": false, 00:10:57.431 "zone_append": false, 00:10:57.431 "compare": false, 00:10:57.431 "compare_and_write": false, 00:10:57.431 "abort": false, 00:10:57.431 "seek_hole": false, 00:10:57.431 "seek_data": false, 00:10:57.431 "copy": false, 00:10:57.431 "nvme_iov_md": false 00:10:57.431 }, 00:10:57.431 
"memory_domains": [ 00:10:57.431 { 00:10:57.431 "dma_device_id": "system", 00:10:57.431 "dma_device_type": 1 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.431 "dma_device_type": 2 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "dma_device_id": "system", 00:10:57.431 "dma_device_type": 1 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.431 "dma_device_type": 2 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "dma_device_id": "system", 00:10:57.431 "dma_device_type": 1 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.431 "dma_device_type": 2 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "dma_device_id": "system", 00:10:57.431 "dma_device_type": 1 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.431 "dma_device_type": 2 00:10:57.431 } 00:10:57.431 ], 00:10:57.431 "driver_specific": { 00:10:57.431 "raid": { 00:10:57.431 "uuid": "04311885-3003-4b68-9e80-ec4aa0ad90be", 00:10:57.431 "strip_size_kb": 64, 00:10:57.431 "state": "online", 00:10:57.431 "raid_level": "concat", 00:10:57.431 "superblock": true, 00:10:57.431 "num_base_bdevs": 4, 00:10:57.431 "num_base_bdevs_discovered": 4, 00:10:57.431 "num_base_bdevs_operational": 4, 00:10:57.431 "base_bdevs_list": [ 00:10:57.431 { 00:10:57.431 "name": "pt1", 00:10:57.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.431 "is_configured": true, 00:10:57.431 "data_offset": 2048, 00:10:57.431 "data_size": 63488 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "name": "pt2", 00:10:57.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.431 "is_configured": true, 00:10:57.431 "data_offset": 2048, 00:10:57.431 "data_size": 63488 00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "name": "pt3", 00:10:57.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.431 "is_configured": true, 00:10:57.431 "data_offset": 2048, 00:10:57.431 "data_size": 63488 
00:10:57.431 }, 00:10:57.431 { 00:10:57.431 "name": "pt4", 00:10:57.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.431 "is_configured": true, 00:10:57.431 "data_offset": 2048, 00:10:57.431 "data_size": 63488 00:10:57.431 } 00:10:57.431 ] 00:10:57.431 } 00:10:57.431 } 00:10:57.431 }' 00:10:57.431 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:57.691 pt2 00:10:57.691 pt3 00:10:57.691 pt4' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.691 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.952 [2024-11-18 10:39:23.610461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 04311885-3003-4b68-9e80-ec4aa0ad90be '!=' 04311885-3003-4b68-9e80-ec4aa0ad90be ']' 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72476 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72476 ']' 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72476 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72476 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.952 killing process with pid 72476 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72476' 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72476 00:10:57.952 [2024-11-18 10:39:23.678560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.952 [2024-11-18 10:39:23.678682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.952 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72476 00:10:57.952 [2024-11-18 10:39:23.678782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.952 [2024-11-18 10:39:23.678793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:58.213 [2024-11-18 10:39:24.095304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.592 ************************************ 00:10:59.592 END TEST raid_superblock_test 00:10:59.592 ************************************ 00:10:59.592 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:59.592 00:10:59.592 real 0m5.683s 00:10:59.592 user 0m8.030s 00:10:59.592 sys 0m1.067s 00:10:59.592 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.592 10:39:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.592 10:39:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:59.592 10:39:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:59.592 10:39:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.592 10:39:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.592 ************************************ 00:10:59.592 START TEST raid_read_error_test 00:10:59.592 ************************************ 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ob0LYsGCje 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72748 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72748 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72748 ']' 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.592 10:39:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.592 [2024-11-18 10:39:25.431970] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:59.592 [2024-11-18 10:39:25.432146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72748 ] 00:10:59.852 [2024-11-18 10:39:25.610251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.112 [2024-11-18 10:39:25.742245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.112 [2024-11-18 10:39:25.971003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.112 [2024-11-18 10:39:25.971128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.372 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.372 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:00.372 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.372 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:00.372 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.372 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 BaseBdev1_malloc 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 true 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 [2024-11-18 10:39:26.309575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:00.633 [2024-11-18 10:39:26.309636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.633 [2024-11-18 10:39:26.309658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:00.633 [2024-11-18 10:39:26.309669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.633 [2024-11-18 10:39:26.312057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.633 [2024-11-18 10:39:26.312096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:00.633 BaseBdev1 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 BaseBdev2_malloc 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 true 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 [2024-11-18 10:39:26.381470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:00.633 [2024-11-18 10:39:26.381534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.633 [2024-11-18 10:39:26.381550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:00.633 [2024-11-18 10:39:26.381562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.633 [2024-11-18 10:39:26.383880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.633 [2024-11-18 10:39:26.383916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:00.633 BaseBdev2 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 BaseBdev3_malloc 00:11:00.633 10:39:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 true 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.633 [2024-11-18 10:39:26.468695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:00.633 [2024-11-18 10:39:26.468815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.633 [2024-11-18 10:39:26.468849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:00.633 [2024-11-18 10:39:26.468879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.633 [2024-11-18 10:39:26.471229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.633 [2024-11-18 10:39:26.471265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:00.633 BaseBdev3 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.633 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.894 BaseBdev4_malloc 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.894 true 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.894 [2024-11-18 10:39:26.541383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:00.894 [2024-11-18 10:39:26.541511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.894 [2024-11-18 10:39:26.541545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:00.894 [2024-11-18 10:39:26.541577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.894 [2024-11-18 10:39:26.543992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.894 [2024-11-18 10:39:26.544070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:00.894 BaseBdev4 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.894 [2024-11-18 10:39:26.553429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.894 [2024-11-18 10:39:26.555524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.894 [2024-11-18 10:39:26.555637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.894 [2024-11-18 10:39:26.555737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:00.894 [2024-11-18 10:39:26.556002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:00.894 [2024-11-18 10:39:26.556051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:00.894 [2024-11-18 10:39:26.556339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:00.894 [2024-11-18 10:39:26.556537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:00.894 [2024-11-18 10:39:26.556578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:00.894 [2024-11-18 10:39:26.556739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:00.894 10:39:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.894 "name": "raid_bdev1", 00:11:00.894 "uuid": "6f58579b-8a3d-4026-9baf-18615e55e2ef", 00:11:00.894 "strip_size_kb": 64, 00:11:00.894 "state": "online", 00:11:00.894 "raid_level": "concat", 00:11:00.894 "superblock": true, 00:11:00.894 "num_base_bdevs": 4, 00:11:00.894 "num_base_bdevs_discovered": 4, 00:11:00.894 "num_base_bdevs_operational": 4, 00:11:00.894 "base_bdevs_list": [ 
00:11:00.894 { 00:11:00.894 "name": "BaseBdev1", 00:11:00.894 "uuid": "d104971e-501f-5a05-8bd9-36ed6e05b3fc", 00:11:00.894 "is_configured": true, 00:11:00.894 "data_offset": 2048, 00:11:00.894 "data_size": 63488 00:11:00.894 }, 00:11:00.894 { 00:11:00.894 "name": "BaseBdev2", 00:11:00.894 "uuid": "ba7d7564-5667-5c6e-859b-0a96ddaf7741", 00:11:00.894 "is_configured": true, 00:11:00.894 "data_offset": 2048, 00:11:00.894 "data_size": 63488 00:11:00.894 }, 00:11:00.894 { 00:11:00.894 "name": "BaseBdev3", 00:11:00.894 "uuid": "5673dae5-df78-5a58-8b35-589c02704b60", 00:11:00.894 "is_configured": true, 00:11:00.894 "data_offset": 2048, 00:11:00.894 "data_size": 63488 00:11:00.894 }, 00:11:00.894 { 00:11:00.894 "name": "BaseBdev4", 00:11:00.894 "uuid": "f5c90e71-efc0-5c89-a522-4fad90441ef1", 00:11:00.894 "is_configured": true, 00:11:00.894 "data_offset": 2048, 00:11:00.894 "data_size": 63488 00:11:00.894 } 00:11:00.894 ] 00:11:00.894 }' 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.894 10:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.153 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:01.153 10:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:01.413 [2024-11-18 10:39:27.077936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:02.352 10:39:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:02.352 10:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.352 10:39:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.352 10:39:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.352 10:39:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.352 "name": "raid_bdev1", 00:11:02.352 "uuid": "6f58579b-8a3d-4026-9baf-18615e55e2ef", 00:11:02.352 "strip_size_kb": 64, 00:11:02.352 "state": "online", 00:11:02.352 "raid_level": "concat", 00:11:02.352 "superblock": true, 00:11:02.352 "num_base_bdevs": 4, 00:11:02.352 "num_base_bdevs_discovered": 4, 00:11:02.352 "num_base_bdevs_operational": 4, 00:11:02.352 "base_bdevs_list": [ 00:11:02.352 { 00:11:02.352 "name": "BaseBdev1", 00:11:02.352 "uuid": "d104971e-501f-5a05-8bd9-36ed6e05b3fc", 00:11:02.352 "is_configured": true, 00:11:02.352 "data_offset": 2048, 00:11:02.352 "data_size": 63488 00:11:02.352 }, 00:11:02.352 { 00:11:02.352 "name": "BaseBdev2", 00:11:02.352 "uuid": "ba7d7564-5667-5c6e-859b-0a96ddaf7741", 00:11:02.352 "is_configured": true, 00:11:02.352 "data_offset": 2048, 00:11:02.352 "data_size": 63488 00:11:02.352 }, 00:11:02.352 { 00:11:02.352 "name": "BaseBdev3", 00:11:02.352 "uuid": "5673dae5-df78-5a58-8b35-589c02704b60", 00:11:02.352 "is_configured": true, 00:11:02.352 "data_offset": 2048, 00:11:02.352 "data_size": 63488 00:11:02.352 }, 00:11:02.352 { 00:11:02.352 "name": "BaseBdev4", 00:11:02.352 "uuid": "f5c90e71-efc0-5c89-a522-4fad90441ef1", 00:11:02.352 "is_configured": true, 00:11:02.352 "data_offset": 2048, 00:11:02.352 "data_size": 63488 00:11:02.352 } 00:11:02.352 ] 00:11:02.352 }' 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.352 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.612 [2024-11-18 10:39:28.458068] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.612 [2024-11-18 10:39:28.458237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.612 [2024-11-18 10:39:28.460930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.612 [2024-11-18 10:39:28.461027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.612 [2024-11-18 10:39:28.461101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.612 [2024-11-18 10:39:28.461166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.612 { 00:11:02.612 "results": [ 00:11:02.612 { 00:11:02.612 "job": "raid_bdev1", 00:11:02.612 "core_mask": "0x1", 00:11:02.612 "workload": "randrw", 00:11:02.612 "percentage": 50, 00:11:02.612 "status": "finished", 00:11:02.612 "queue_depth": 1, 00:11:02.612 "io_size": 131072, 00:11:02.612 "runtime": 1.380858, 00:11:02.612 "iops": 14259.974595505113, 00:11:02.612 "mibps": 1782.4968244381391, 00:11:02.612 "io_failed": 1, 00:11:02.612 "io_timeout": 0, 00:11:02.612 "avg_latency_us": 98.82495833211368, 00:11:02.612 "min_latency_us": 25.2646288209607, 00:11:02.612 "max_latency_us": 1366.5257641921398 00:11:02.612 } 00:11:02.612 ], 00:11:02.612 "core_count": 1 00:11:02.612 } 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72748 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72748 ']' 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72748 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.612 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72748 00:11:02.872 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.872 killing process with pid 72748 00:11:02.872 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.872 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72748' 00:11:02.872 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72748 00:11:02.872 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72748 00:11:02.872 [2024-11-18 10:39:28.501290] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.132 [2024-11-18 10:39:28.845134] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ob0LYsGCje 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:04.541 00:11:04.541 real 0m4.743s 00:11:04.541 user 0m5.445s 00:11:04.541 sys 0m0.699s 00:11:04.541 ************************************ 00:11:04.541 END TEST raid_read_error_test 
00:11:04.541 ************************************ 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.541 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.541 10:39:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:04.541 10:39:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:04.541 10:39:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.541 10:39:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.541 ************************************ 00:11:04.541 START TEST raid_write_error_test 00:11:04.541 ************************************ 00:11:04.541 10:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:04.541 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:04.541 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:04.541 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:04.541 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:04.541 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.541 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:04.541 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7bP4zuCqqE 00:11:04.542 10:39:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72894 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72894 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72894 ']' 00:11:04.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.542 10:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.542 [2024-11-18 10:39:30.249880] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:04.542 [2024-11-18 10:39:30.249993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72894 ] 00:11:04.542 [2024-11-18 10:39:30.424352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.801 [2024-11-18 10:39:30.552398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.061 [2024-11-18 10:39:30.784882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.061 [2024-11-18 10:39:30.784918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.322 BaseBdev1_malloc 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.322 true 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.322 [2024-11-18 10:39:31.131826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:05.322 [2024-11-18 10:39:31.131891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.322 [2024-11-18 10:39:31.131911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:05.322 [2024-11-18 10:39:31.131923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.322 [2024-11-18 10:39:31.134322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.322 [2024-11-18 10:39:31.134445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:05.322 BaseBdev1 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.322 BaseBdev2_malloc 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:05.322 10:39:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.322 true 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.322 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.322 [2024-11-18 10:39:31.203643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:05.322 [2024-11-18 10:39:31.203698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.322 [2024-11-18 10:39:31.203714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:05.322 [2024-11-18 10:39:31.203726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.583 [2024-11-18 10:39:31.206105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.583 [2024-11-18 10:39:31.206148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:05.583 BaseBdev2 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:05.583 BaseBdev3_malloc 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 true 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 [2024-11-18 10:39:31.307543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:05.583 [2024-11-18 10:39:31.307593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.583 [2024-11-18 10:39:31.307609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:05.583 [2024-11-18 10:39:31.307620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.583 [2024-11-18 10:39:31.309920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.583 [2024-11-18 10:39:31.309956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:05.583 BaseBdev3 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 BaseBdev4_malloc 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 true 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 [2024-11-18 10:39:31.380387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:05.583 [2024-11-18 10:39:31.380441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.583 [2024-11-18 10:39:31.380458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:05.583 [2024-11-18 10:39:31.380468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.583 [2024-11-18 10:39:31.382785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.583 [2024-11-18 10:39:31.382915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:05.583 BaseBdev4 
00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 [2024-11-18 10:39:31.392435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.583 [2024-11-18 10:39:31.394534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.583 [2024-11-18 10:39:31.394609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.583 [2024-11-18 10:39:31.394674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.583 [2024-11-18 10:39:31.394914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:05.583 [2024-11-18 10:39:31.394928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.583 [2024-11-18 10:39:31.395167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:05.583 [2024-11-18 10:39:31.395342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:05.583 [2024-11-18 10:39:31.395353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:05.583 [2024-11-18 10:39:31.395501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.583 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.584 "name": "raid_bdev1", 00:11:05.584 "uuid": "8e165e2d-c75d-4c04-80a8-a1f967666d9f", 00:11:05.584 "strip_size_kb": 64, 00:11:05.584 "state": "online", 00:11:05.584 "raid_level": "concat", 00:11:05.584 "superblock": true, 00:11:05.584 "num_base_bdevs": 4, 00:11:05.584 "num_base_bdevs_discovered": 4, 00:11:05.584 
"num_base_bdevs_operational": 4, 00:11:05.584 "base_bdevs_list": [ 00:11:05.584 { 00:11:05.584 "name": "BaseBdev1", 00:11:05.584 "uuid": "36a845ba-040c-5373-a657-73ef7dedd2cb", 00:11:05.584 "is_configured": true, 00:11:05.584 "data_offset": 2048, 00:11:05.584 "data_size": 63488 00:11:05.584 }, 00:11:05.584 { 00:11:05.584 "name": "BaseBdev2", 00:11:05.584 "uuid": "3bb4bf9f-1451-5d39-9bb2-49301b0f54a8", 00:11:05.584 "is_configured": true, 00:11:05.584 "data_offset": 2048, 00:11:05.584 "data_size": 63488 00:11:05.584 }, 00:11:05.584 { 00:11:05.584 "name": "BaseBdev3", 00:11:05.584 "uuid": "b05fc67d-2078-56db-8c17-0c2070870ea9", 00:11:05.584 "is_configured": true, 00:11:05.584 "data_offset": 2048, 00:11:05.584 "data_size": 63488 00:11:05.584 }, 00:11:05.584 { 00:11:05.584 "name": "BaseBdev4", 00:11:05.584 "uuid": "9b934f3c-db77-5200-9914-7a5a30a80819", 00:11:05.584 "is_configured": true, 00:11:05.584 "data_offset": 2048, 00:11:05.584 "data_size": 63488 00:11:05.584 } 00:11:05.584 ] 00:11:05.584 }' 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.584 10:39:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.153 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:06.153 10:39:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:06.153 [2024-11-18 10:39:31.924962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.093 10:39:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.093 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.093 "name": "raid_bdev1", 00:11:07.093 "uuid": "8e165e2d-c75d-4c04-80a8-a1f967666d9f", 00:11:07.093 "strip_size_kb": 64, 00:11:07.094 "state": "online", 00:11:07.094 "raid_level": "concat", 00:11:07.094 "superblock": true, 00:11:07.094 "num_base_bdevs": 4, 00:11:07.094 "num_base_bdevs_discovered": 4, 00:11:07.094 "num_base_bdevs_operational": 4, 00:11:07.094 "base_bdevs_list": [ 00:11:07.094 { 00:11:07.094 "name": "BaseBdev1", 00:11:07.094 "uuid": "36a845ba-040c-5373-a657-73ef7dedd2cb", 00:11:07.094 "is_configured": true, 00:11:07.094 "data_offset": 2048, 00:11:07.094 "data_size": 63488 00:11:07.094 }, 00:11:07.094 { 00:11:07.094 "name": "BaseBdev2", 00:11:07.094 "uuid": "3bb4bf9f-1451-5d39-9bb2-49301b0f54a8", 00:11:07.094 "is_configured": true, 00:11:07.094 "data_offset": 2048, 00:11:07.094 "data_size": 63488 00:11:07.094 }, 00:11:07.094 { 00:11:07.094 "name": "BaseBdev3", 00:11:07.094 "uuid": "b05fc67d-2078-56db-8c17-0c2070870ea9", 00:11:07.094 "is_configured": true, 00:11:07.094 "data_offset": 2048, 00:11:07.094 "data_size": 63488 00:11:07.094 }, 00:11:07.094 { 00:11:07.094 "name": "BaseBdev4", 00:11:07.094 "uuid": "9b934f3c-db77-5200-9914-7a5a30a80819", 00:11:07.094 "is_configured": true, 00:11:07.094 "data_offset": 2048, 00:11:07.094 "data_size": 63488 00:11:07.094 } 00:11:07.094 ] 00:11:07.094 }' 00:11:07.094 10:39:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.094 10:39:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.664 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.664 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.664 10:39:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.664 [2024-11-18 10:39:33.253312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.664 [2024-11-18 10:39:33.253458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.664 [2024-11-18 10:39:33.255991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.664 [2024-11-18 10:39:33.256098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.664 [2024-11-18 10:39:33.256165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.664 [2024-11-18 10:39:33.256235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:07.664 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.664 { 00:11:07.664 "results": [ 00:11:07.664 { 00:11:07.664 "job": "raid_bdev1", 00:11:07.664 "core_mask": "0x1", 00:11:07.664 "workload": "randrw", 00:11:07.664 "percentage": 50, 00:11:07.664 "status": "finished", 00:11:07.664 "queue_depth": 1, 00:11:07.664 "io_size": 131072, 00:11:07.664 "runtime": 1.328913, 00:11:07.664 "iops": 14377.163892594925, 00:11:07.664 "mibps": 1797.1454865743656, 00:11:07.664 "io_failed": 1, 00:11:07.665 "io_timeout": 0, 00:11:07.665 "avg_latency_us": 98.11746900870597, 00:11:07.665 "min_latency_us": 24.482096069868994, 00:11:07.665 "max_latency_us": 1330.7528384279476 00:11:07.665 } 00:11:07.665 ], 00:11:07.665 "core_count": 1 00:11:07.665 } 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72894 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72894 ']' 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72894 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72894 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.665 killing process with pid 72894 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72894' 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72894 00:11:07.665 [2024-11-18 10:39:33.298549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.665 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72894 00:11:07.925 [2024-11-18 10:39:33.641666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7bP4zuCqqE 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:09.307 ************************************ 00:11:09.307 END TEST raid_write_error_test 00:11:09.307 ************************************ 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.307 10:39:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:09.307 00:11:09.307 real 0m4.727s 00:11:09.307 user 0m5.397s 00:11:09.307 sys 0m0.692s 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.307 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.307 10:39:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:09.307 10:39:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:09.307 10:39:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:09.307 10:39:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.307 10:39:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.307 ************************************ 00:11:09.307 START TEST raid_state_function_test 00:11:09.307 ************************************ 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
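The test above greps the raid_bdev1 line out of the bdevperf output and takes the sixth field as `fail_per_s`, then asserts it is non-zero (`[[ 0.75 != \0\.\0\0 ]]`). Assuming `fail_per_s` is simply failed I/Os divided by runtime (which is consistent with the numbers in this log: `io_failed` 1 over `runtime` 1.328913 seconds, rounded to two decimals), the extracted 0.75 can be reproduced like this:

```python
# Reproduce the fail_per_s value the test extracted above.
# Assumption (not from the log itself): fail_per_s = io_failed / runtime,
# rounded to two decimal places; the logged numbers are consistent with it.
io_failed = 1          # "io_failed" field from the results JSON
runtime = 1.328913     # "runtime" field, in seconds

fail_per_s = round(io_failed / runtime, 2)
assert fail_per_s == 0.75   # matches the value the awk '{print $6}' pulled out
```

A non-zero failure rate is the expected outcome here: the write-error test injects a failure on purpose, and the `has_redundancy concat` check returning 1 confirms concat offers no redundancy to absorb it.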
00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:09.307 10:39:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:09.307 Process raid pid: 73032 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73032 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73032' 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73032 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73032 ']' 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.307 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.308 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.308 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 [2024-11-18 10:39:35.044540] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:09.308 [2024-11-18 10:39:35.044658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.568 [2024-11-18 10:39:35.220406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.568 [2024-11-18 10:39:35.349252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.827 [2024-11-18 10:39:35.582970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.827 [2024-11-18 10:39:35.583102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.087 [2024-11-18 10:39:35.873008] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.087 [2024-11-18 10:39:35.873068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.087 [2024-11-18 10:39:35.873079] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.087 [2024-11-18 10:39:35.873088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.087 [2024-11-18 10:39:35.873095] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:10.087 [2024-11-18 10:39:35.873104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.087 [2024-11-18 10:39:35.873109] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:10.087 [2024-11-18 10:39:35.873118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.087 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.088 "name": "Existed_Raid", 00:11:10.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.088 "strip_size_kb": 0, 00:11:10.088 "state": "configuring", 00:11:10.088 "raid_level": "raid1", 00:11:10.088 "superblock": false, 00:11:10.088 "num_base_bdevs": 4, 00:11:10.088 "num_base_bdevs_discovered": 0, 00:11:10.088 "num_base_bdevs_operational": 4, 00:11:10.088 "base_bdevs_list": [ 00:11:10.088 { 00:11:10.088 "name": "BaseBdev1", 00:11:10.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.088 "is_configured": false, 00:11:10.088 "data_offset": 0, 00:11:10.088 "data_size": 0 00:11:10.088 }, 00:11:10.088 { 00:11:10.088 "name": "BaseBdev2", 00:11:10.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.088 "is_configured": false, 00:11:10.088 "data_offset": 0, 00:11:10.088 "data_size": 0 00:11:10.088 }, 00:11:10.088 { 00:11:10.088 "name": "BaseBdev3", 00:11:10.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.088 "is_configured": false, 00:11:10.088 "data_offset": 0, 00:11:10.088 "data_size": 0 00:11:10.088 }, 00:11:10.088 { 00:11:10.088 "name": "BaseBdev4", 00:11:10.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.088 "is_configured": false, 00:11:10.088 "data_offset": 0, 00:11:10.088 "data_size": 0 00:11:10.088 } 00:11:10.088 ] 00:11:10.088 }' 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.088 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.658 [2024-11-18 10:39:36.320208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.658 [2024-11-18 10:39:36.320303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.658 [2024-11-18 10:39:36.328197] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.658 [2024-11-18 10:39:36.328271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.658 [2024-11-18 10:39:36.328298] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.658 [2024-11-18 10:39:36.328321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.658 [2024-11-18 10:39:36.328339] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.658 [2024-11-18 10:39:36.328370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.658 [2024-11-18 10:39:36.328387] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:10.658 [2024-11-18 10:39:36.328408] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.658 [2024-11-18 10:39:36.376138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.658 BaseBdev1 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.658 [ 00:11:10.658 { 00:11:10.658 "name": "BaseBdev1", 00:11:10.658 "aliases": [ 00:11:10.658 "aa4812b9-7dd2-4619-aa2c-982fa735492c" 00:11:10.658 ], 00:11:10.658 "product_name": "Malloc disk", 00:11:10.658 "block_size": 512, 00:11:10.658 "num_blocks": 65536, 00:11:10.658 "uuid": "aa4812b9-7dd2-4619-aa2c-982fa735492c", 00:11:10.658 "assigned_rate_limits": { 00:11:10.658 "rw_ios_per_sec": 0, 00:11:10.658 "rw_mbytes_per_sec": 0, 00:11:10.658 "r_mbytes_per_sec": 0, 00:11:10.658 "w_mbytes_per_sec": 0 00:11:10.658 }, 00:11:10.658 "claimed": true, 00:11:10.658 "claim_type": "exclusive_write", 00:11:10.658 "zoned": false, 00:11:10.658 "supported_io_types": { 00:11:10.658 "read": true, 00:11:10.658 "write": true, 00:11:10.658 "unmap": true, 00:11:10.658 "flush": true, 00:11:10.658 "reset": true, 00:11:10.658 "nvme_admin": false, 00:11:10.658 "nvme_io": false, 00:11:10.658 "nvme_io_md": false, 00:11:10.658 "write_zeroes": true, 00:11:10.658 "zcopy": true, 00:11:10.658 "get_zone_info": false, 00:11:10.658 "zone_management": false, 00:11:10.658 "zone_append": false, 00:11:10.658 "compare": false, 00:11:10.658 "compare_and_write": false, 00:11:10.658 "abort": true, 00:11:10.658 "seek_hole": false, 00:11:10.658 "seek_data": false, 00:11:10.658 "copy": true, 00:11:10.658 "nvme_iov_md": false 00:11:10.658 }, 00:11:10.658 "memory_domains": [ 00:11:10.658 { 00:11:10.658 "dma_device_id": "system", 00:11:10.658 "dma_device_type": 1 00:11:10.658 }, 00:11:10.658 { 00:11:10.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.658 "dma_device_type": 2 00:11:10.658 } 00:11:10.658 ], 00:11:10.658 "driver_specific": {} 00:11:10.658 } 00:11:10.658 ] 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.658 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.658 "name": "Existed_Raid", 
00:11:10.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.658 "strip_size_kb": 0, 00:11:10.658 "state": "configuring", 00:11:10.658 "raid_level": "raid1", 00:11:10.658 "superblock": false, 00:11:10.658 "num_base_bdevs": 4, 00:11:10.658 "num_base_bdevs_discovered": 1, 00:11:10.658 "num_base_bdevs_operational": 4, 00:11:10.658 "base_bdevs_list": [ 00:11:10.658 { 00:11:10.658 "name": "BaseBdev1", 00:11:10.658 "uuid": "aa4812b9-7dd2-4619-aa2c-982fa735492c", 00:11:10.658 "is_configured": true, 00:11:10.658 "data_offset": 0, 00:11:10.658 "data_size": 65536 00:11:10.658 }, 00:11:10.658 { 00:11:10.658 "name": "BaseBdev2", 00:11:10.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.658 "is_configured": false, 00:11:10.658 "data_offset": 0, 00:11:10.658 "data_size": 0 00:11:10.659 }, 00:11:10.659 { 00:11:10.659 "name": "BaseBdev3", 00:11:10.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.659 "is_configured": false, 00:11:10.659 "data_offset": 0, 00:11:10.659 "data_size": 0 00:11:10.659 }, 00:11:10.659 { 00:11:10.659 "name": "BaseBdev4", 00:11:10.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.659 "is_configured": false, 00:11:10.659 "data_offset": 0, 00:11:10.659 "data_size": 0 00:11:10.659 } 00:11:10.659 ] 00:11:10.659 }' 00:11:10.659 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.659 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.228 [2024-11-18 10:39:36.823360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.228 [2024-11-18 10:39:36.823414] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.228 [2024-11-18 10:39:36.831404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.228 [2024-11-18 10:39:36.833438] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.228 [2024-11-18 10:39:36.833479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.228 [2024-11-18 10:39:36.833489] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.228 [2024-11-18 10:39:36.833500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.228 [2024-11-18 10:39:36.833507] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.228 [2024-11-18 10:39:36.833516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.228 
10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.228 "name": "Existed_Raid", 00:11:11.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.228 "strip_size_kb": 0, 00:11:11.228 "state": "configuring", 00:11:11.228 "raid_level": "raid1", 00:11:11.228 "superblock": false, 00:11:11.228 "num_base_bdevs": 4, 00:11:11.228 "num_base_bdevs_discovered": 1, 
00:11:11.228 "num_base_bdevs_operational": 4, 00:11:11.228 "base_bdevs_list": [ 00:11:11.228 { 00:11:11.228 "name": "BaseBdev1", 00:11:11.228 "uuid": "aa4812b9-7dd2-4619-aa2c-982fa735492c", 00:11:11.228 "is_configured": true, 00:11:11.228 "data_offset": 0, 00:11:11.228 "data_size": 65536 00:11:11.228 }, 00:11:11.228 { 00:11:11.228 "name": "BaseBdev2", 00:11:11.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.228 "is_configured": false, 00:11:11.228 "data_offset": 0, 00:11:11.228 "data_size": 0 00:11:11.228 }, 00:11:11.228 { 00:11:11.228 "name": "BaseBdev3", 00:11:11.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.228 "is_configured": false, 00:11:11.228 "data_offset": 0, 00:11:11.228 "data_size": 0 00:11:11.228 }, 00:11:11.228 { 00:11:11.228 "name": "BaseBdev4", 00:11:11.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.228 "is_configured": false, 00:11:11.228 "data_offset": 0, 00:11:11.228 "data_size": 0 00:11:11.228 } 00:11:11.228 ] 00:11:11.228 }' 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.228 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.488 [2024-11-18 10:39:37.273700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.488 BaseBdev2 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:11.488 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.489 [ 00:11:11.489 { 00:11:11.489 "name": "BaseBdev2", 00:11:11.489 "aliases": [ 00:11:11.489 "d90aa28e-4a49-4dd9-9d6d-24a9b3f67197" 00:11:11.489 ], 00:11:11.489 "product_name": "Malloc disk", 00:11:11.489 "block_size": 512, 00:11:11.489 "num_blocks": 65536, 00:11:11.489 "uuid": "d90aa28e-4a49-4dd9-9d6d-24a9b3f67197", 00:11:11.489 "assigned_rate_limits": { 00:11:11.489 "rw_ios_per_sec": 0, 00:11:11.489 "rw_mbytes_per_sec": 0, 00:11:11.489 "r_mbytes_per_sec": 0, 00:11:11.489 "w_mbytes_per_sec": 0 00:11:11.489 }, 00:11:11.489 "claimed": true, 00:11:11.489 "claim_type": "exclusive_write", 00:11:11.489 "zoned": false, 00:11:11.489 "supported_io_types": { 00:11:11.489 "read": true, 
00:11:11.489 "write": true, 00:11:11.489 "unmap": true, 00:11:11.489 "flush": true, 00:11:11.489 "reset": true, 00:11:11.489 "nvme_admin": false, 00:11:11.489 "nvme_io": false, 00:11:11.489 "nvme_io_md": false, 00:11:11.489 "write_zeroes": true, 00:11:11.489 "zcopy": true, 00:11:11.489 "get_zone_info": false, 00:11:11.489 "zone_management": false, 00:11:11.489 "zone_append": false, 00:11:11.489 "compare": false, 00:11:11.489 "compare_and_write": false, 00:11:11.489 "abort": true, 00:11:11.489 "seek_hole": false, 00:11:11.489 "seek_data": false, 00:11:11.489 "copy": true, 00:11:11.489 "nvme_iov_md": false 00:11:11.489 }, 00:11:11.489 "memory_domains": [ 00:11:11.489 { 00:11:11.489 "dma_device_id": "system", 00:11:11.489 "dma_device_type": 1 00:11:11.489 }, 00:11:11.489 { 00:11:11.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.489 "dma_device_type": 2 00:11:11.489 } 00:11:11.489 ], 00:11:11.489 "driver_specific": {} 00:11:11.489 } 00:11:11.489 ] 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.489 "name": "Existed_Raid", 00:11:11.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.489 "strip_size_kb": 0, 00:11:11.489 "state": "configuring", 00:11:11.489 "raid_level": "raid1", 00:11:11.489 "superblock": false, 00:11:11.489 "num_base_bdevs": 4, 00:11:11.489 "num_base_bdevs_discovered": 2, 00:11:11.489 "num_base_bdevs_operational": 4, 00:11:11.489 "base_bdevs_list": [ 00:11:11.489 { 00:11:11.489 "name": "BaseBdev1", 00:11:11.489 "uuid": "aa4812b9-7dd2-4619-aa2c-982fa735492c", 00:11:11.489 "is_configured": true, 00:11:11.489 "data_offset": 0, 00:11:11.489 "data_size": 65536 00:11:11.489 }, 00:11:11.489 { 00:11:11.489 "name": "BaseBdev2", 00:11:11.489 "uuid": "d90aa28e-4a49-4dd9-9d6d-24a9b3f67197", 00:11:11.489 "is_configured": true, 
00:11:11.489 "data_offset": 0, 00:11:11.489 "data_size": 65536 00:11:11.489 }, 00:11:11.489 { 00:11:11.489 "name": "BaseBdev3", 00:11:11.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.489 "is_configured": false, 00:11:11.489 "data_offset": 0, 00:11:11.489 "data_size": 0 00:11:11.489 }, 00:11:11.489 { 00:11:11.489 "name": "BaseBdev4", 00:11:11.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.489 "is_configured": false, 00:11:11.489 "data_offset": 0, 00:11:11.489 "data_size": 0 00:11:11.489 } 00:11:11.489 ] 00:11:11.489 }' 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.489 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.059 [2024-11-18 10:39:37.767511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.059 BaseBdev3 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.059 [ 00:11:12.059 { 00:11:12.059 "name": "BaseBdev3", 00:11:12.059 "aliases": [ 00:11:12.059 "81a4a9a9-b2c4-4e82-a825-befcac5825b8" 00:11:12.059 ], 00:11:12.059 "product_name": "Malloc disk", 00:11:12.059 "block_size": 512, 00:11:12.059 "num_blocks": 65536, 00:11:12.059 "uuid": "81a4a9a9-b2c4-4e82-a825-befcac5825b8", 00:11:12.059 "assigned_rate_limits": { 00:11:12.059 "rw_ios_per_sec": 0, 00:11:12.059 "rw_mbytes_per_sec": 0, 00:11:12.059 "r_mbytes_per_sec": 0, 00:11:12.059 "w_mbytes_per_sec": 0 00:11:12.059 }, 00:11:12.059 "claimed": true, 00:11:12.059 "claim_type": "exclusive_write", 00:11:12.059 "zoned": false, 00:11:12.059 "supported_io_types": { 00:11:12.059 "read": true, 00:11:12.059 "write": true, 00:11:12.059 "unmap": true, 00:11:12.059 "flush": true, 00:11:12.059 "reset": true, 00:11:12.059 "nvme_admin": false, 00:11:12.059 "nvme_io": false, 00:11:12.059 "nvme_io_md": false, 00:11:12.059 "write_zeroes": true, 00:11:12.059 "zcopy": true, 00:11:12.059 "get_zone_info": false, 00:11:12.059 "zone_management": false, 00:11:12.059 "zone_append": false, 00:11:12.059 "compare": false, 00:11:12.059 "compare_and_write": false, 
00:11:12.059 "abort": true, 00:11:12.059 "seek_hole": false, 00:11:12.059 "seek_data": false, 00:11:12.059 "copy": true, 00:11:12.059 "nvme_iov_md": false 00:11:12.059 }, 00:11:12.059 "memory_domains": [ 00:11:12.059 { 00:11:12.059 "dma_device_id": "system", 00:11:12.059 "dma_device_type": 1 00:11:12.059 }, 00:11:12.059 { 00:11:12.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.059 "dma_device_type": 2 00:11:12.059 } 00:11:12.059 ], 00:11:12.059 "driver_specific": {} 00:11:12.059 } 00:11:12.059 ] 00:11:12.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.060 "name": "Existed_Raid", 00:11:12.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.060 "strip_size_kb": 0, 00:11:12.060 "state": "configuring", 00:11:12.060 "raid_level": "raid1", 00:11:12.060 "superblock": false, 00:11:12.060 "num_base_bdevs": 4, 00:11:12.060 "num_base_bdevs_discovered": 3, 00:11:12.060 "num_base_bdevs_operational": 4, 00:11:12.060 "base_bdevs_list": [ 00:11:12.060 { 00:11:12.060 "name": "BaseBdev1", 00:11:12.060 "uuid": "aa4812b9-7dd2-4619-aa2c-982fa735492c", 00:11:12.060 "is_configured": true, 00:11:12.060 "data_offset": 0, 00:11:12.060 "data_size": 65536 00:11:12.060 }, 00:11:12.060 { 00:11:12.060 "name": "BaseBdev2", 00:11:12.060 "uuid": "d90aa28e-4a49-4dd9-9d6d-24a9b3f67197", 00:11:12.060 "is_configured": true, 00:11:12.060 "data_offset": 0, 00:11:12.060 "data_size": 65536 00:11:12.060 }, 00:11:12.060 { 00:11:12.060 "name": "BaseBdev3", 00:11:12.060 "uuid": "81a4a9a9-b2c4-4e82-a825-befcac5825b8", 00:11:12.060 "is_configured": true, 00:11:12.060 "data_offset": 0, 00:11:12.060 "data_size": 65536 00:11:12.060 }, 00:11:12.060 { 00:11:12.060 "name": "BaseBdev4", 00:11:12.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.060 "is_configured": false, 
00:11:12.060 "data_offset": 0, 00:11:12.060 "data_size": 0 00:11:12.060 } 00:11:12.060 ] 00:11:12.060 }' 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.060 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.629 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:12.629 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.629 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.629 [2024-11-18 10:39:38.271867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.629 [2024-11-18 10:39:38.271928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:12.629 [2024-11-18 10:39:38.271936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:12.629 [2024-11-18 10:39:38.272269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:12.629 [2024-11-18 10:39:38.272469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:12.629 [2024-11-18 10:39:38.272484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:12.629 [2024-11-18 10:39:38.272772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.630 BaseBdev4 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.630 [ 00:11:12.630 { 00:11:12.630 "name": "BaseBdev4", 00:11:12.630 "aliases": [ 00:11:12.630 "80b0c0cf-c9ad-46d3-98e2-5ed149ac16ea" 00:11:12.630 ], 00:11:12.630 "product_name": "Malloc disk", 00:11:12.630 "block_size": 512, 00:11:12.630 "num_blocks": 65536, 00:11:12.630 "uuid": "80b0c0cf-c9ad-46d3-98e2-5ed149ac16ea", 00:11:12.630 "assigned_rate_limits": { 00:11:12.630 "rw_ios_per_sec": 0, 00:11:12.630 "rw_mbytes_per_sec": 0, 00:11:12.630 "r_mbytes_per_sec": 0, 00:11:12.630 "w_mbytes_per_sec": 0 00:11:12.630 }, 00:11:12.630 "claimed": true, 00:11:12.630 "claim_type": "exclusive_write", 00:11:12.630 "zoned": false, 00:11:12.630 "supported_io_types": { 00:11:12.630 "read": true, 00:11:12.630 "write": true, 00:11:12.630 "unmap": true, 00:11:12.630 "flush": true, 00:11:12.630 "reset": true, 00:11:12.630 
"nvme_admin": false, 00:11:12.630 "nvme_io": false, 00:11:12.630 "nvme_io_md": false, 00:11:12.630 "write_zeroes": true, 00:11:12.630 "zcopy": true, 00:11:12.630 "get_zone_info": false, 00:11:12.630 "zone_management": false, 00:11:12.630 "zone_append": false, 00:11:12.630 "compare": false, 00:11:12.630 "compare_and_write": false, 00:11:12.630 "abort": true, 00:11:12.630 "seek_hole": false, 00:11:12.630 "seek_data": false, 00:11:12.630 "copy": true, 00:11:12.630 "nvme_iov_md": false 00:11:12.630 }, 00:11:12.630 "memory_domains": [ 00:11:12.630 { 00:11:12.630 "dma_device_id": "system", 00:11:12.630 "dma_device_type": 1 00:11:12.630 }, 00:11:12.630 { 00:11:12.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.630 "dma_device_type": 2 00:11:12.630 } 00:11:12.630 ], 00:11:12.630 "driver_specific": {} 00:11:12.630 } 00:11:12.630 ] 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.630 10:39:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.630 "name": "Existed_Raid", 00:11:12.630 "uuid": "ee36b687-be11-49a3-9ddc-8fd4e075d016", 00:11:12.630 "strip_size_kb": 0, 00:11:12.630 "state": "online", 00:11:12.630 "raid_level": "raid1", 00:11:12.630 "superblock": false, 00:11:12.630 "num_base_bdevs": 4, 00:11:12.630 "num_base_bdevs_discovered": 4, 00:11:12.630 "num_base_bdevs_operational": 4, 00:11:12.630 "base_bdevs_list": [ 00:11:12.630 { 00:11:12.630 "name": "BaseBdev1", 00:11:12.630 "uuid": "aa4812b9-7dd2-4619-aa2c-982fa735492c", 00:11:12.630 "is_configured": true, 00:11:12.630 "data_offset": 0, 00:11:12.630 "data_size": 65536 00:11:12.630 }, 00:11:12.630 { 00:11:12.630 "name": "BaseBdev2", 00:11:12.630 "uuid": "d90aa28e-4a49-4dd9-9d6d-24a9b3f67197", 00:11:12.630 "is_configured": true, 00:11:12.630 "data_offset": 0, 00:11:12.630 "data_size": 65536 00:11:12.630 }, 00:11:12.630 { 00:11:12.630 "name": "BaseBdev3", 00:11:12.630 "uuid": 
"81a4a9a9-b2c4-4e82-a825-befcac5825b8", 00:11:12.630 "is_configured": true, 00:11:12.630 "data_offset": 0, 00:11:12.630 "data_size": 65536 00:11:12.630 }, 00:11:12.630 { 00:11:12.630 "name": "BaseBdev4", 00:11:12.630 "uuid": "80b0c0cf-c9ad-46d3-98e2-5ed149ac16ea", 00:11:12.630 "is_configured": true, 00:11:12.630 "data_offset": 0, 00:11:12.630 "data_size": 65536 00:11:12.630 } 00:11:12.630 ] 00:11:12.630 }' 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.630 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.890 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:12.891 [2024-11-18 10:39:38.703481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.891 10:39:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:12.891 "name": "Existed_Raid", 00:11:12.891 "aliases": [ 00:11:12.891 "ee36b687-be11-49a3-9ddc-8fd4e075d016" 00:11:12.891 ], 00:11:12.891 "product_name": "Raid Volume", 00:11:12.891 "block_size": 512, 00:11:12.891 "num_blocks": 65536, 00:11:12.891 "uuid": "ee36b687-be11-49a3-9ddc-8fd4e075d016", 00:11:12.891 "assigned_rate_limits": { 00:11:12.891 "rw_ios_per_sec": 0, 00:11:12.891 "rw_mbytes_per_sec": 0, 00:11:12.891 "r_mbytes_per_sec": 0, 00:11:12.891 "w_mbytes_per_sec": 0 00:11:12.891 }, 00:11:12.891 "claimed": false, 00:11:12.891 "zoned": false, 00:11:12.891 "supported_io_types": { 00:11:12.891 "read": true, 00:11:12.891 "write": true, 00:11:12.891 "unmap": false, 00:11:12.891 "flush": false, 00:11:12.891 "reset": true, 00:11:12.891 "nvme_admin": false, 00:11:12.891 "nvme_io": false, 00:11:12.891 "nvme_io_md": false, 00:11:12.891 "write_zeroes": true, 00:11:12.891 "zcopy": false, 00:11:12.891 "get_zone_info": false, 00:11:12.891 "zone_management": false, 00:11:12.891 "zone_append": false, 00:11:12.891 "compare": false, 00:11:12.891 "compare_and_write": false, 00:11:12.891 "abort": false, 00:11:12.891 "seek_hole": false, 00:11:12.891 "seek_data": false, 00:11:12.891 "copy": false, 00:11:12.891 "nvme_iov_md": false 00:11:12.891 }, 00:11:12.891 "memory_domains": [ 00:11:12.891 { 00:11:12.891 "dma_device_id": "system", 00:11:12.891 "dma_device_type": 1 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.891 "dma_device_type": 2 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "dma_device_id": "system", 00:11:12.891 "dma_device_type": 1 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.891 "dma_device_type": 2 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "dma_device_id": "system", 00:11:12.891 "dma_device_type": 1 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:12.891 "dma_device_type": 2 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "dma_device_id": "system", 00:11:12.891 "dma_device_type": 1 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.891 "dma_device_type": 2 00:11:12.891 } 00:11:12.891 ], 00:11:12.891 "driver_specific": { 00:11:12.891 "raid": { 00:11:12.891 "uuid": "ee36b687-be11-49a3-9ddc-8fd4e075d016", 00:11:12.891 "strip_size_kb": 0, 00:11:12.891 "state": "online", 00:11:12.891 "raid_level": "raid1", 00:11:12.891 "superblock": false, 00:11:12.891 "num_base_bdevs": 4, 00:11:12.891 "num_base_bdevs_discovered": 4, 00:11:12.891 "num_base_bdevs_operational": 4, 00:11:12.891 "base_bdevs_list": [ 00:11:12.891 { 00:11:12.891 "name": "BaseBdev1", 00:11:12.891 "uuid": "aa4812b9-7dd2-4619-aa2c-982fa735492c", 00:11:12.891 "is_configured": true, 00:11:12.891 "data_offset": 0, 00:11:12.891 "data_size": 65536 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "name": "BaseBdev2", 00:11:12.891 "uuid": "d90aa28e-4a49-4dd9-9d6d-24a9b3f67197", 00:11:12.891 "is_configured": true, 00:11:12.891 "data_offset": 0, 00:11:12.891 "data_size": 65536 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "name": "BaseBdev3", 00:11:12.891 "uuid": "81a4a9a9-b2c4-4e82-a825-befcac5825b8", 00:11:12.891 "is_configured": true, 00:11:12.891 "data_offset": 0, 00:11:12.891 "data_size": 65536 00:11:12.891 }, 00:11:12.891 { 00:11:12.891 "name": "BaseBdev4", 00:11:12.891 "uuid": "80b0c0cf-c9ad-46d3-98e2-5ed149ac16ea", 00:11:12.891 "is_configured": true, 00:11:12.891 "data_offset": 0, 00:11:12.891 "data_size": 65536 00:11:12.891 } 00:11:12.891 ] 00:11:12.891 } 00:11:12.891 } 00:11:12.891 }' 00:11:12.891 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:13.151 BaseBdev2 00:11:13.151 BaseBdev3 
00:11:13.151 BaseBdev4' 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.151 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.152 10:39:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.152 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.152 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.412 10:39:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.412 [2024-11-18 10:39:39.050752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.412 
10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.412 "name": "Existed_Raid", 00:11:13.412 "uuid": "ee36b687-be11-49a3-9ddc-8fd4e075d016", 00:11:13.412 "strip_size_kb": 0, 00:11:13.412 "state": "online", 00:11:13.412 "raid_level": "raid1", 00:11:13.412 "superblock": false, 00:11:13.412 "num_base_bdevs": 4, 00:11:13.412 "num_base_bdevs_discovered": 3, 00:11:13.412 "num_base_bdevs_operational": 3, 00:11:13.412 "base_bdevs_list": [ 00:11:13.412 { 00:11:13.412 "name": null, 00:11:13.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.412 "is_configured": false, 00:11:13.412 "data_offset": 0, 00:11:13.412 "data_size": 65536 00:11:13.412 }, 00:11:13.412 { 00:11:13.412 "name": "BaseBdev2", 00:11:13.412 "uuid": "d90aa28e-4a49-4dd9-9d6d-24a9b3f67197", 00:11:13.412 "is_configured": true, 00:11:13.412 "data_offset": 0, 00:11:13.412 "data_size": 65536 00:11:13.412 }, 00:11:13.412 { 00:11:13.412 "name": "BaseBdev3", 00:11:13.412 "uuid": "81a4a9a9-b2c4-4e82-a825-befcac5825b8", 00:11:13.412 "is_configured": true, 00:11:13.412 "data_offset": 0, 
00:11:13.412 "data_size": 65536 00:11:13.412 }, 00:11:13.412 { 00:11:13.412 "name": "BaseBdev4", 00:11:13.412 "uuid": "80b0c0cf-c9ad-46d3-98e2-5ed149ac16ea", 00:11:13.412 "is_configured": true, 00:11:13.412 "data_offset": 0, 00:11:13.412 "data_size": 65536 00:11:13.412 } 00:11:13.412 ] 00:11:13.412 }' 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.412 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.982 [2024-11-18 10:39:39.682614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.982 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.982 [2024-11-18 10:39:39.841893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.248 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.248 [2024-11-18 10:39:39.994654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:14.248 [2024-11-18 10:39:39.994765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.248 [2024-11-18 10:39:40.096734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.248 [2024-11-18 10:39:40.096792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.248 [2024-11-18 10:39:40.096805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:14.248 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.248 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.248 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.248 10:39:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.248 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:14.249 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.249 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.249 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.520 BaseBdev2 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.520 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.521 [ 00:11:14.521 { 00:11:14.521 "name": "BaseBdev2", 00:11:14.521 "aliases": [ 00:11:14.521 "58c6ae87-621d-4841-a064-abea030f3a31" 00:11:14.521 ], 00:11:14.521 "product_name": "Malloc disk", 00:11:14.521 "block_size": 512, 00:11:14.521 "num_blocks": 65536, 00:11:14.521 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:14.521 "assigned_rate_limits": { 00:11:14.521 "rw_ios_per_sec": 0, 00:11:14.521 "rw_mbytes_per_sec": 0, 00:11:14.521 "r_mbytes_per_sec": 0, 00:11:14.521 "w_mbytes_per_sec": 0 00:11:14.521 }, 00:11:14.521 "claimed": false, 00:11:14.521 "zoned": false, 00:11:14.521 "supported_io_types": { 00:11:14.521 "read": true, 00:11:14.521 "write": true, 00:11:14.521 "unmap": true, 00:11:14.521 "flush": true, 00:11:14.521 "reset": true, 00:11:14.521 "nvme_admin": false, 00:11:14.521 "nvme_io": false, 00:11:14.521 "nvme_io_md": false, 00:11:14.521 "write_zeroes": true, 00:11:14.521 "zcopy": true, 00:11:14.521 "get_zone_info": false, 00:11:14.521 "zone_management": false, 00:11:14.521 "zone_append": false, 
00:11:14.521 "compare": false, 00:11:14.521 "compare_and_write": false, 00:11:14.521 "abort": true, 00:11:14.521 "seek_hole": false, 00:11:14.521 "seek_data": false, 00:11:14.521 "copy": true, 00:11:14.521 "nvme_iov_md": false 00:11:14.521 }, 00:11:14.521 "memory_domains": [ 00:11:14.521 { 00:11:14.521 "dma_device_id": "system", 00:11:14.521 "dma_device_type": 1 00:11:14.521 }, 00:11:14.521 { 00:11:14.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.521 "dma_device_type": 2 00:11:14.521 } 00:11:14.521 ], 00:11:14.521 "driver_specific": {} 00:11:14.521 } 00:11:14.521 ] 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.521 BaseBdev3 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.521 [ 00:11:14.521 { 00:11:14.521 "name": "BaseBdev3", 00:11:14.521 "aliases": [ 00:11:14.521 "ffa1a934-2071-4492-b7bb-9ed829a8358b" 00:11:14.521 ], 00:11:14.521 "product_name": "Malloc disk", 00:11:14.521 "block_size": 512, 00:11:14.521 "num_blocks": 65536, 00:11:14.521 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:14.521 "assigned_rate_limits": { 00:11:14.521 "rw_ios_per_sec": 0, 00:11:14.521 "rw_mbytes_per_sec": 0, 00:11:14.521 "r_mbytes_per_sec": 0, 00:11:14.521 "w_mbytes_per_sec": 0 00:11:14.521 }, 00:11:14.521 "claimed": false, 00:11:14.521 "zoned": false, 00:11:14.521 "supported_io_types": { 00:11:14.521 "read": true, 00:11:14.521 "write": true, 00:11:14.521 "unmap": true, 00:11:14.521 "flush": true, 00:11:14.521 "reset": true, 00:11:14.521 "nvme_admin": false, 00:11:14.521 "nvme_io": false, 00:11:14.521 "nvme_io_md": false, 00:11:14.521 "write_zeroes": true, 00:11:14.521 "zcopy": true, 00:11:14.521 "get_zone_info": false, 00:11:14.521 "zone_management": false, 00:11:14.521 "zone_append": false, 
00:11:14.521 "compare": false, 00:11:14.521 "compare_and_write": false, 00:11:14.521 "abort": true, 00:11:14.521 "seek_hole": false, 00:11:14.521 "seek_data": false, 00:11:14.521 "copy": true, 00:11:14.521 "nvme_iov_md": false 00:11:14.521 }, 00:11:14.521 "memory_domains": [ 00:11:14.521 { 00:11:14.521 "dma_device_id": "system", 00:11:14.521 "dma_device_type": 1 00:11:14.521 }, 00:11:14.521 { 00:11:14.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.521 "dma_device_type": 2 00:11:14.521 } 00:11:14.521 ], 00:11:14.521 "driver_specific": {} 00:11:14.521 } 00:11:14.521 ] 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.521 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.522 BaseBdev4 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.522 [ 00:11:14.522 { 00:11:14.522 "name": "BaseBdev4", 00:11:14.522 "aliases": [ 00:11:14.522 "f70f585a-daee-45ac-b696-cc82535dd065" 00:11:14.522 ], 00:11:14.522 "product_name": "Malloc disk", 00:11:14.522 "block_size": 512, 00:11:14.522 "num_blocks": 65536, 00:11:14.522 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:14.522 "assigned_rate_limits": { 00:11:14.522 "rw_ios_per_sec": 0, 00:11:14.522 "rw_mbytes_per_sec": 0, 00:11:14.522 "r_mbytes_per_sec": 0, 00:11:14.522 "w_mbytes_per_sec": 0 00:11:14.522 }, 00:11:14.522 "claimed": false, 00:11:14.522 "zoned": false, 00:11:14.522 "supported_io_types": { 00:11:14.522 "read": true, 00:11:14.522 "write": true, 00:11:14.522 "unmap": true, 00:11:14.522 "flush": true, 00:11:14.522 "reset": true, 00:11:14.522 "nvme_admin": false, 00:11:14.522 "nvme_io": false, 00:11:14.522 "nvme_io_md": false, 00:11:14.522 "write_zeroes": true, 00:11:14.522 "zcopy": true, 00:11:14.522 "get_zone_info": false, 00:11:14.522 "zone_management": false, 00:11:14.522 "zone_append": false, 
00:11:14.522 "compare": false, 00:11:14.522 "compare_and_write": false, 00:11:14.522 "abort": true, 00:11:14.522 "seek_hole": false, 00:11:14.522 "seek_data": false, 00:11:14.522 "copy": true, 00:11:14.522 "nvme_iov_md": false 00:11:14.522 }, 00:11:14.522 "memory_domains": [ 00:11:14.522 { 00:11:14.522 "dma_device_id": "system", 00:11:14.522 "dma_device_type": 1 00:11:14.522 }, 00:11:14.522 { 00:11:14.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.522 "dma_device_type": 2 00:11:14.522 } 00:11:14.522 ], 00:11:14.522 "driver_specific": {} 00:11:14.522 } 00:11:14.522 ] 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.522 [2024-11-18 10:39:40.391433] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.522 [2024-11-18 10:39:40.391486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.522 [2024-11-18 10:39:40.391506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.522 [2024-11-18 10:39:40.393466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.522 [2024-11-18 10:39:40.393515] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.522 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.782 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.782 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.782 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.782 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.782 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.782 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:14.782 "name": "Existed_Raid", 00:11:14.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.782 "strip_size_kb": 0, 00:11:14.782 "state": "configuring", 00:11:14.782 "raid_level": "raid1", 00:11:14.782 "superblock": false, 00:11:14.782 "num_base_bdevs": 4, 00:11:14.782 "num_base_bdevs_discovered": 3, 00:11:14.782 "num_base_bdevs_operational": 4, 00:11:14.782 "base_bdevs_list": [ 00:11:14.782 { 00:11:14.782 "name": "BaseBdev1", 00:11:14.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.782 "is_configured": false, 00:11:14.782 "data_offset": 0, 00:11:14.782 "data_size": 0 00:11:14.782 }, 00:11:14.782 { 00:11:14.782 "name": "BaseBdev2", 00:11:14.782 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:14.782 "is_configured": true, 00:11:14.782 "data_offset": 0, 00:11:14.782 "data_size": 65536 00:11:14.782 }, 00:11:14.782 { 00:11:14.782 "name": "BaseBdev3", 00:11:14.782 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:14.782 "is_configured": true, 00:11:14.782 "data_offset": 0, 00:11:14.782 "data_size": 65536 00:11:14.782 }, 00:11:14.782 { 00:11:14.782 "name": "BaseBdev4", 00:11:14.782 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:14.782 "is_configured": true, 00:11:14.782 "data_offset": 0, 00:11:14.782 "data_size": 65536 00:11:14.782 } 00:11:14.782 ] 00:11:14.782 }' 00:11:14.782 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.782 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.042 [2024-11-18 10:39:40.858798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.042 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.042 "name": "Existed_Raid", 00:11:15.042 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:15.042 "strip_size_kb": 0, 00:11:15.042 "state": "configuring", 00:11:15.042 "raid_level": "raid1", 00:11:15.042 "superblock": false, 00:11:15.042 "num_base_bdevs": 4, 00:11:15.042 "num_base_bdevs_discovered": 2, 00:11:15.042 "num_base_bdevs_operational": 4, 00:11:15.043 "base_bdevs_list": [ 00:11:15.043 { 00:11:15.043 "name": "BaseBdev1", 00:11:15.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.043 "is_configured": false, 00:11:15.043 "data_offset": 0, 00:11:15.043 "data_size": 0 00:11:15.043 }, 00:11:15.043 { 00:11:15.043 "name": null, 00:11:15.043 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:15.043 "is_configured": false, 00:11:15.043 "data_offset": 0, 00:11:15.043 "data_size": 65536 00:11:15.043 }, 00:11:15.043 { 00:11:15.043 "name": "BaseBdev3", 00:11:15.043 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:15.043 "is_configured": true, 00:11:15.043 "data_offset": 0, 00:11:15.043 "data_size": 65536 00:11:15.043 }, 00:11:15.043 { 00:11:15.043 "name": "BaseBdev4", 00:11:15.043 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:15.043 "is_configured": true, 00:11:15.043 "data_offset": 0, 00:11:15.043 "data_size": 65536 00:11:15.043 } 00:11:15.043 ] 00:11:15.043 }' 00:11:15.043 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.043 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.613 [2024-11-18 10:39:41.379276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.613 BaseBdev1 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.613 [ 00:11:15.613 { 00:11:15.613 "name": "BaseBdev1", 00:11:15.613 "aliases": [ 00:11:15.613 "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0" 00:11:15.613 ], 00:11:15.613 "product_name": "Malloc disk", 00:11:15.613 "block_size": 512, 00:11:15.613 "num_blocks": 65536, 00:11:15.613 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:15.613 "assigned_rate_limits": { 00:11:15.613 "rw_ios_per_sec": 0, 00:11:15.613 "rw_mbytes_per_sec": 0, 00:11:15.613 "r_mbytes_per_sec": 0, 00:11:15.613 "w_mbytes_per_sec": 0 00:11:15.613 }, 00:11:15.613 "claimed": true, 00:11:15.613 "claim_type": "exclusive_write", 00:11:15.613 "zoned": false, 00:11:15.613 "supported_io_types": { 00:11:15.613 "read": true, 00:11:15.613 "write": true, 00:11:15.613 "unmap": true, 00:11:15.613 "flush": true, 00:11:15.613 "reset": true, 00:11:15.613 "nvme_admin": false, 00:11:15.613 "nvme_io": false, 00:11:15.613 "nvme_io_md": false, 00:11:15.613 "write_zeroes": true, 00:11:15.613 "zcopy": true, 00:11:15.613 "get_zone_info": false, 00:11:15.613 "zone_management": false, 00:11:15.613 "zone_append": false, 00:11:15.613 "compare": false, 00:11:15.613 "compare_and_write": false, 00:11:15.613 "abort": true, 00:11:15.613 "seek_hole": false, 00:11:15.613 "seek_data": false, 00:11:15.613 "copy": true, 00:11:15.613 "nvme_iov_md": false 00:11:15.613 }, 00:11:15.613 "memory_domains": [ 00:11:15.613 { 00:11:15.613 "dma_device_id": "system", 00:11:15.613 "dma_device_type": 1 00:11:15.613 }, 00:11:15.613 { 00:11:15.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.613 "dma_device_type": 2 00:11:15.613 } 00:11:15.613 ], 00:11:15.613 "driver_specific": {} 00:11:15.613 } 00:11:15.613 ] 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.613 "name": "Existed_Raid", 00:11:15.613 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:15.613 "strip_size_kb": 0, 00:11:15.613 "state": "configuring", 00:11:15.613 "raid_level": "raid1", 00:11:15.613 "superblock": false, 00:11:15.613 "num_base_bdevs": 4, 00:11:15.613 "num_base_bdevs_discovered": 3, 00:11:15.613 "num_base_bdevs_operational": 4, 00:11:15.613 "base_bdevs_list": [ 00:11:15.613 { 00:11:15.613 "name": "BaseBdev1", 00:11:15.613 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:15.613 "is_configured": true, 00:11:15.613 "data_offset": 0, 00:11:15.613 "data_size": 65536 00:11:15.613 }, 00:11:15.613 { 00:11:15.613 "name": null, 00:11:15.613 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:15.613 "is_configured": false, 00:11:15.613 "data_offset": 0, 00:11:15.613 "data_size": 65536 00:11:15.613 }, 00:11:15.613 { 00:11:15.613 "name": "BaseBdev3", 00:11:15.613 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:15.613 "is_configured": true, 00:11:15.613 "data_offset": 0, 00:11:15.613 "data_size": 65536 00:11:15.613 }, 00:11:15.613 { 00:11:15.613 "name": "BaseBdev4", 00:11:15.613 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:15.613 "is_configured": true, 00:11:15.613 "data_offset": 0, 00:11:15.613 "data_size": 65536 00:11:15.613 } 00:11:15.613 ] 00:11:15.613 }' 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.613 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.180 [2024-11-18 10:39:41.918525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.180 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.181 "name": "Existed_Raid", 00:11:16.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.181 "strip_size_kb": 0, 00:11:16.181 "state": "configuring", 00:11:16.181 "raid_level": "raid1", 00:11:16.181 "superblock": false, 00:11:16.181 "num_base_bdevs": 4, 00:11:16.181 "num_base_bdevs_discovered": 2, 00:11:16.181 "num_base_bdevs_operational": 4, 00:11:16.181 "base_bdevs_list": [ 00:11:16.181 { 00:11:16.181 "name": "BaseBdev1", 00:11:16.181 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:16.181 "is_configured": true, 00:11:16.181 "data_offset": 0, 00:11:16.181 "data_size": 65536 00:11:16.181 }, 00:11:16.181 { 00:11:16.181 "name": null, 00:11:16.181 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:16.181 "is_configured": false, 00:11:16.181 "data_offset": 0, 00:11:16.181 "data_size": 65536 00:11:16.181 }, 00:11:16.181 { 00:11:16.181 "name": null, 00:11:16.181 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:16.181 "is_configured": false, 00:11:16.181 "data_offset": 0, 00:11:16.181 "data_size": 65536 00:11:16.181 }, 00:11:16.181 { 00:11:16.181 "name": "BaseBdev4", 00:11:16.181 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:16.181 "is_configured": true, 00:11:16.181 "data_offset": 0, 00:11:16.181 "data_size": 65536 00:11:16.181 } 00:11:16.181 ] 00:11:16.181 }' 00:11:16.181 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.181 10:39:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.495 [2024-11-18 10:39:42.349750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.495 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.496 10:39:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.496 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.755 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.755 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.755 "name": "Existed_Raid", 00:11:16.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.755 "strip_size_kb": 0, 00:11:16.755 "state": "configuring", 00:11:16.755 "raid_level": "raid1", 00:11:16.755 "superblock": false, 00:11:16.755 "num_base_bdevs": 4, 00:11:16.755 "num_base_bdevs_discovered": 3, 00:11:16.755 "num_base_bdevs_operational": 4, 00:11:16.755 "base_bdevs_list": [ 00:11:16.755 { 00:11:16.755 "name": "BaseBdev1", 00:11:16.755 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:16.755 "is_configured": true, 00:11:16.755 "data_offset": 0, 00:11:16.755 "data_size": 65536 00:11:16.755 }, 00:11:16.755 { 00:11:16.755 "name": null, 00:11:16.755 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:16.755 "is_configured": false, 00:11:16.755 "data_offset": 
0, 00:11:16.755 "data_size": 65536 00:11:16.755 }, 00:11:16.755 { 00:11:16.755 "name": "BaseBdev3", 00:11:16.755 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:16.755 "is_configured": true, 00:11:16.755 "data_offset": 0, 00:11:16.755 "data_size": 65536 00:11:16.755 }, 00:11:16.755 { 00:11:16.755 "name": "BaseBdev4", 00:11:16.755 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:16.755 "is_configured": true, 00:11:16.755 "data_offset": 0, 00:11:16.755 "data_size": 65536 00:11:16.755 } 00:11:16.755 ] 00:11:16.755 }' 00:11:16.755 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.755 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.016 [2024-11-18 10:39:42.789040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.016 10:39:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.016 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.276 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.276 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.276 "name": "Existed_Raid", 00:11:17.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.276 "strip_size_kb": 0, 00:11:17.276 "state": "configuring", 00:11:17.276 
"raid_level": "raid1", 00:11:17.276 "superblock": false, 00:11:17.276 "num_base_bdevs": 4, 00:11:17.276 "num_base_bdevs_discovered": 2, 00:11:17.276 "num_base_bdevs_operational": 4, 00:11:17.276 "base_bdevs_list": [ 00:11:17.276 { 00:11:17.276 "name": null, 00:11:17.276 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:17.276 "is_configured": false, 00:11:17.276 "data_offset": 0, 00:11:17.276 "data_size": 65536 00:11:17.276 }, 00:11:17.276 { 00:11:17.276 "name": null, 00:11:17.276 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:17.276 "is_configured": false, 00:11:17.276 "data_offset": 0, 00:11:17.276 "data_size": 65536 00:11:17.276 }, 00:11:17.276 { 00:11:17.276 "name": "BaseBdev3", 00:11:17.276 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:17.276 "is_configured": true, 00:11:17.276 "data_offset": 0, 00:11:17.276 "data_size": 65536 00:11:17.276 }, 00:11:17.276 { 00:11:17.276 "name": "BaseBdev4", 00:11:17.276 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:17.276 "is_configured": true, 00:11:17.276 "data_offset": 0, 00:11:17.276 "data_size": 65536 00:11:17.276 } 00:11:17.276 ] 00:11:17.276 }' 00:11:17.276 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.276 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.536 [2024-11-18 10:39:43.362539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.536 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.798 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.798 "name": "Existed_Raid", 00:11:17.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.798 "strip_size_kb": 0, 00:11:17.798 "state": "configuring", 00:11:17.798 "raid_level": "raid1", 00:11:17.798 "superblock": false, 00:11:17.798 "num_base_bdevs": 4, 00:11:17.798 "num_base_bdevs_discovered": 3, 00:11:17.798 "num_base_bdevs_operational": 4, 00:11:17.798 "base_bdevs_list": [ 00:11:17.798 { 00:11:17.798 "name": null, 00:11:17.798 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:17.798 "is_configured": false, 00:11:17.798 "data_offset": 0, 00:11:17.798 "data_size": 65536 00:11:17.798 }, 00:11:17.798 { 00:11:17.798 "name": "BaseBdev2", 00:11:17.798 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:17.798 "is_configured": true, 00:11:17.798 "data_offset": 0, 00:11:17.798 "data_size": 65536 00:11:17.798 }, 00:11:17.798 { 00:11:17.798 "name": "BaseBdev3", 00:11:17.798 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:17.798 "is_configured": true, 00:11:17.798 "data_offset": 0, 00:11:17.798 "data_size": 65536 00:11:17.798 }, 00:11:17.798 { 00:11:17.798 "name": "BaseBdev4", 00:11:17.798 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:17.798 "is_configured": true, 00:11:17.798 "data_offset": 0, 00:11:17.798 "data_size": 65536 00:11:17.798 } 00:11:17.798 ] 00:11:17.798 }' 00:11:17.798 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.798 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.058 10:39:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.058 [2024-11-18 10:39:43.895710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:18.058 [2024-11-18 10:39:43.895766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:18.058 [2024-11-18 10:39:43.895776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:18.058 
[2024-11-18 10:39:43.896081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:18.058 [2024-11-18 10:39:43.896285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:18.058 [2024-11-18 10:39:43.896300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:18.058 [2024-11-18 10:39:43.896563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.058 NewBaseBdev 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.058 [ 00:11:18.058 { 00:11:18.058 "name": "NewBaseBdev", 00:11:18.058 "aliases": [ 00:11:18.058 "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0" 00:11:18.058 ], 00:11:18.058 "product_name": "Malloc disk", 00:11:18.058 "block_size": 512, 00:11:18.058 "num_blocks": 65536, 00:11:18.058 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:18.058 "assigned_rate_limits": { 00:11:18.058 "rw_ios_per_sec": 0, 00:11:18.058 "rw_mbytes_per_sec": 0, 00:11:18.058 "r_mbytes_per_sec": 0, 00:11:18.058 "w_mbytes_per_sec": 0 00:11:18.058 }, 00:11:18.058 "claimed": true, 00:11:18.058 "claim_type": "exclusive_write", 00:11:18.058 "zoned": false, 00:11:18.058 "supported_io_types": { 00:11:18.058 "read": true, 00:11:18.058 "write": true, 00:11:18.058 "unmap": true, 00:11:18.058 "flush": true, 00:11:18.058 "reset": true, 00:11:18.058 "nvme_admin": false, 00:11:18.058 "nvme_io": false, 00:11:18.058 "nvme_io_md": false, 00:11:18.058 "write_zeroes": true, 00:11:18.058 "zcopy": true, 00:11:18.058 "get_zone_info": false, 00:11:18.058 "zone_management": false, 00:11:18.058 "zone_append": false, 00:11:18.058 "compare": false, 00:11:18.058 "compare_and_write": false, 00:11:18.058 "abort": true, 00:11:18.058 "seek_hole": false, 00:11:18.058 "seek_data": false, 00:11:18.058 "copy": true, 00:11:18.058 "nvme_iov_md": false 00:11:18.058 }, 00:11:18.058 "memory_domains": [ 00:11:18.058 { 00:11:18.058 "dma_device_id": "system", 00:11:18.058 "dma_device_type": 1 00:11:18.058 }, 00:11:18.058 { 00:11:18.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.058 "dma_device_type": 2 00:11:18.058 } 00:11:18.058 ], 00:11:18.058 "driver_specific": {} 00:11:18.058 } 00:11:18.058 ] 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.058 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.318 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.318 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.318 "name": "Existed_Raid", 00:11:18.318 "uuid": "5020ec21-ead1-49d5-bc24-935ac248ebd7", 00:11:18.318 "strip_size_kb": 0, 00:11:18.318 "state": "online", 00:11:18.318 
"raid_level": "raid1", 00:11:18.318 "superblock": false, 00:11:18.318 "num_base_bdevs": 4, 00:11:18.318 "num_base_bdevs_discovered": 4, 00:11:18.318 "num_base_bdevs_operational": 4, 00:11:18.318 "base_bdevs_list": [ 00:11:18.318 { 00:11:18.318 "name": "NewBaseBdev", 00:11:18.318 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:18.318 "is_configured": true, 00:11:18.318 "data_offset": 0, 00:11:18.318 "data_size": 65536 00:11:18.318 }, 00:11:18.318 { 00:11:18.318 "name": "BaseBdev2", 00:11:18.318 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:18.318 "is_configured": true, 00:11:18.318 "data_offset": 0, 00:11:18.318 "data_size": 65536 00:11:18.318 }, 00:11:18.318 { 00:11:18.318 "name": "BaseBdev3", 00:11:18.318 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:18.318 "is_configured": true, 00:11:18.318 "data_offset": 0, 00:11:18.318 "data_size": 65536 00:11:18.318 }, 00:11:18.318 { 00:11:18.318 "name": "BaseBdev4", 00:11:18.318 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:18.318 "is_configured": true, 00:11:18.318 "data_offset": 0, 00:11:18.318 "data_size": 65536 00:11:18.318 } 00:11:18.318 ] 00:11:18.318 }' 00:11:18.318 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.318 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.578 [2024-11-18 10:39:44.343361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.578 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.578 "name": "Existed_Raid", 00:11:18.578 "aliases": [ 00:11:18.578 "5020ec21-ead1-49d5-bc24-935ac248ebd7" 00:11:18.578 ], 00:11:18.578 "product_name": "Raid Volume", 00:11:18.578 "block_size": 512, 00:11:18.578 "num_blocks": 65536, 00:11:18.578 "uuid": "5020ec21-ead1-49d5-bc24-935ac248ebd7", 00:11:18.578 "assigned_rate_limits": { 00:11:18.578 "rw_ios_per_sec": 0, 00:11:18.578 "rw_mbytes_per_sec": 0, 00:11:18.578 "r_mbytes_per_sec": 0, 00:11:18.578 "w_mbytes_per_sec": 0 00:11:18.578 }, 00:11:18.578 "claimed": false, 00:11:18.578 "zoned": false, 00:11:18.578 "supported_io_types": { 00:11:18.578 "read": true, 00:11:18.578 "write": true, 00:11:18.578 "unmap": false, 00:11:18.578 "flush": false, 00:11:18.578 "reset": true, 00:11:18.578 "nvme_admin": false, 00:11:18.578 "nvme_io": false, 00:11:18.578 "nvme_io_md": false, 00:11:18.578 "write_zeroes": true, 00:11:18.578 "zcopy": false, 00:11:18.578 "get_zone_info": false, 00:11:18.578 "zone_management": false, 00:11:18.578 "zone_append": false, 00:11:18.578 "compare": false, 00:11:18.578 "compare_and_write": false, 00:11:18.578 "abort": false, 00:11:18.578 "seek_hole": false, 00:11:18.578 "seek_data": false, 00:11:18.578 
"copy": false, 00:11:18.578 "nvme_iov_md": false 00:11:18.578 }, 00:11:18.578 "memory_domains": [ 00:11:18.578 { 00:11:18.578 "dma_device_id": "system", 00:11:18.578 "dma_device_type": 1 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.578 "dma_device_type": 2 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "dma_device_id": "system", 00:11:18.578 "dma_device_type": 1 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.578 "dma_device_type": 2 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "dma_device_id": "system", 00:11:18.578 "dma_device_type": 1 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.578 "dma_device_type": 2 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "dma_device_id": "system", 00:11:18.578 "dma_device_type": 1 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.578 "dma_device_type": 2 00:11:18.578 } 00:11:18.578 ], 00:11:18.578 "driver_specific": { 00:11:18.578 "raid": { 00:11:18.578 "uuid": "5020ec21-ead1-49d5-bc24-935ac248ebd7", 00:11:18.578 "strip_size_kb": 0, 00:11:18.578 "state": "online", 00:11:18.578 "raid_level": "raid1", 00:11:18.578 "superblock": false, 00:11:18.578 "num_base_bdevs": 4, 00:11:18.578 "num_base_bdevs_discovered": 4, 00:11:18.578 "num_base_bdevs_operational": 4, 00:11:18.578 "base_bdevs_list": [ 00:11:18.578 { 00:11:18.578 "name": "NewBaseBdev", 00:11:18.578 "uuid": "3532a1b1-f5dc-48b0-86cc-6a80f2ce49c0", 00:11:18.578 "is_configured": true, 00:11:18.578 "data_offset": 0, 00:11:18.578 "data_size": 65536 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "name": "BaseBdev2", 00:11:18.578 "uuid": "58c6ae87-621d-4841-a064-abea030f3a31", 00:11:18.578 "is_configured": true, 00:11:18.578 "data_offset": 0, 00:11:18.578 "data_size": 65536 00:11:18.578 }, 00:11:18.578 { 00:11:18.578 "name": "BaseBdev3", 00:11:18.578 "uuid": "ffa1a934-2071-4492-b7bb-9ed829a8358b", 00:11:18.578 
"is_configured": true, 00:11:18.578 "data_offset": 0, 00:11:18.578 "data_size": 65536 00:11:18.579 }, 00:11:18.579 { 00:11:18.579 "name": "BaseBdev4", 00:11:18.579 "uuid": "f70f585a-daee-45ac-b696-cc82535dd065", 00:11:18.579 "is_configured": true, 00:11:18.579 "data_offset": 0, 00:11:18.579 "data_size": 65536 00:11:18.579 } 00:11:18.579 ] 00:11:18.579 } 00:11:18.579 } 00:11:18.579 }' 00:11:18.579 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.579 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:18.579 BaseBdev2 00:11:18.579 BaseBdev3 00:11:18.579 BaseBdev4' 00:11:18.579 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.838 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.839 10:39:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.839 10:39:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 [2024-11-18 10:39:44.670599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.839 [2024-11-18 10:39:44.670629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.839 [2024-11-18 10:39:44.670711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.839 [2024-11-18 10:39:44.671045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.839 [2024-11-18 10:39:44.671071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73032 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73032 ']' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73032 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73032 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.839 killing process with pid 73032 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73032' 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73032 00:11:18.839 [2024-11-18 10:39:44.712897] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.839 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73032 00:11:19.407 [2024-11-18 10:39:45.126119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:20.788 00:11:20.788 real 0m11.339s 00:11:20.788 user 0m17.753s 00:11:20.788 sys 0m2.138s 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.788 ************************************ 00:11:20.788 END TEST raid_state_function_test 00:11:20.788 ************************************ 
00:11:20.788 10:39:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:20.788 10:39:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:20.788 10:39:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.788 10:39:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.788 ************************************ 00:11:20.788 START TEST raid_state_function_test_sb 00:11:20.788 ************************************ 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.788 
10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73703 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73703' 00:11:20.788 Process raid pid: 73703 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73703 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73703 ']' 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.788 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.789 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.789 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.789 [2024-11-18 10:39:46.443166] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:20.789 [2024-11-18 10:39:46.443355] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.789 [2024-11-18 10:39:46.617371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.048 [2024-11-18 10:39:46.749383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.306 [2024-11-18 10:39:46.980881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.306 [2024-11-18 10:39:46.981031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.564 [2024-11-18 10:39:47.271308] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.564 [2024-11-18 10:39:47.271367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.564 [2024-11-18 10:39:47.271378] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.564 [2024-11-18 10:39:47.271388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.564 [2024-11-18 10:39:47.271394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:21.564 [2024-11-18 10:39:47.271403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.564 [2024-11-18 10:39:47.271415] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.564 [2024-11-18 10:39:47.271425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.564 10:39:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.564 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.564 "name": "Existed_Raid", 00:11:21.564 "uuid": "410ef488-b6cc-4cac-9a89-affe4ba183e2", 00:11:21.564 "strip_size_kb": 0, 00:11:21.564 "state": "configuring", 00:11:21.564 "raid_level": "raid1", 00:11:21.564 "superblock": true, 00:11:21.564 "num_base_bdevs": 4, 00:11:21.564 "num_base_bdevs_discovered": 0, 00:11:21.565 "num_base_bdevs_operational": 4, 00:11:21.565 "base_bdevs_list": [ 00:11:21.565 { 00:11:21.565 "name": "BaseBdev1", 00:11:21.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.565 "is_configured": false, 00:11:21.565 "data_offset": 0, 00:11:21.565 "data_size": 0 00:11:21.565 }, 00:11:21.565 { 00:11:21.565 "name": "BaseBdev2", 00:11:21.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.565 "is_configured": false, 00:11:21.565 "data_offset": 0, 00:11:21.565 "data_size": 0 00:11:21.565 }, 00:11:21.565 { 00:11:21.565 "name": "BaseBdev3", 00:11:21.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.565 "is_configured": false, 00:11:21.565 "data_offset": 0, 00:11:21.565 "data_size": 0 00:11:21.565 }, 00:11:21.565 { 00:11:21.565 "name": "BaseBdev4", 00:11:21.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.565 "is_configured": false, 00:11:21.565 "data_offset": 0, 00:11:21.565 "data_size": 0 00:11:21.565 } 00:11:21.565 ] 00:11:21.565 }' 00:11:21.565 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.565 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.131 10:39:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.131 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.131 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 [2024-11-18 10:39:47.738461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.132 [2024-11-18 10:39:47.738576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 [2024-11-18 10:39:47.750446] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.132 [2024-11-18 10:39:47.750526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.132 [2024-11-18 10:39:47.750554] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.132 [2024-11-18 10:39:47.750576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.132 [2024-11-18 10:39:47.750594] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.132 [2024-11-18 10:39:47.750614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.132 [2024-11-18 10:39:47.750631] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:22.132 [2024-11-18 10:39:47.750652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 [2024-11-18 10:39:47.802896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.132 BaseBdev1 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 [ 00:11:22.132 { 00:11:22.132 "name": "BaseBdev1", 00:11:22.132 "aliases": [ 00:11:22.132 "42d2ce5a-3e87-4bfe-adff-43af16723a14" 00:11:22.132 ], 00:11:22.132 "product_name": "Malloc disk", 00:11:22.132 "block_size": 512, 00:11:22.132 "num_blocks": 65536, 00:11:22.132 "uuid": "42d2ce5a-3e87-4bfe-adff-43af16723a14", 00:11:22.132 "assigned_rate_limits": { 00:11:22.132 "rw_ios_per_sec": 0, 00:11:22.132 "rw_mbytes_per_sec": 0, 00:11:22.132 "r_mbytes_per_sec": 0, 00:11:22.132 "w_mbytes_per_sec": 0 00:11:22.132 }, 00:11:22.132 "claimed": true, 00:11:22.132 "claim_type": "exclusive_write", 00:11:22.132 "zoned": false, 00:11:22.132 "supported_io_types": { 00:11:22.132 "read": true, 00:11:22.132 "write": true, 00:11:22.132 "unmap": true, 00:11:22.132 "flush": true, 00:11:22.132 "reset": true, 00:11:22.132 "nvme_admin": false, 00:11:22.132 "nvme_io": false, 00:11:22.132 "nvme_io_md": false, 00:11:22.132 "write_zeroes": true, 00:11:22.132 "zcopy": true, 00:11:22.132 "get_zone_info": false, 00:11:22.132 "zone_management": false, 00:11:22.132 "zone_append": false, 00:11:22.132 "compare": false, 00:11:22.132 "compare_and_write": false, 00:11:22.132 "abort": true, 00:11:22.132 "seek_hole": false, 00:11:22.132 "seek_data": false, 00:11:22.132 "copy": true, 00:11:22.132 "nvme_iov_md": false 00:11:22.132 }, 00:11:22.132 "memory_domains": [ 00:11:22.132 { 00:11:22.132 "dma_device_id": "system", 00:11:22.132 "dma_device_type": 1 00:11:22.132 }, 00:11:22.132 { 00:11:22.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.132 "dma_device_type": 2 00:11:22.132 } 00:11:22.132 ], 00:11:22.132 "driver_specific": {} 
00:11:22.132 } 00:11:22.132 ] 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.132 "name": "Existed_Raid", 00:11:22.132 "uuid": "23a4dda6-1a2b-4b14-89cb-26cccb6b2c32", 00:11:22.132 "strip_size_kb": 0, 00:11:22.132 "state": "configuring", 00:11:22.132 "raid_level": "raid1", 00:11:22.132 "superblock": true, 00:11:22.132 "num_base_bdevs": 4, 00:11:22.132 "num_base_bdevs_discovered": 1, 00:11:22.132 "num_base_bdevs_operational": 4, 00:11:22.132 "base_bdevs_list": [ 00:11:22.132 { 00:11:22.132 "name": "BaseBdev1", 00:11:22.132 "uuid": "42d2ce5a-3e87-4bfe-adff-43af16723a14", 00:11:22.132 "is_configured": true, 00:11:22.132 "data_offset": 2048, 00:11:22.132 "data_size": 63488 00:11:22.132 }, 00:11:22.132 { 00:11:22.132 "name": "BaseBdev2", 00:11:22.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.132 "is_configured": false, 00:11:22.132 "data_offset": 0, 00:11:22.132 "data_size": 0 00:11:22.132 }, 00:11:22.132 { 00:11:22.132 "name": "BaseBdev3", 00:11:22.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.132 "is_configured": false, 00:11:22.132 "data_offset": 0, 00:11:22.132 "data_size": 0 00:11:22.132 }, 00:11:22.132 { 00:11:22.132 "name": "BaseBdev4", 00:11:22.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.132 "is_configured": false, 00:11:22.132 "data_offset": 0, 00:11:22.132 "data_size": 0 00:11:22.132 } 00:11:22.132 ] 00:11:22.132 }' 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.132 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.701 [2024-11-18 10:39:48.290061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.701 [2024-11-18 10:39:48.290105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.701 [2024-11-18 10:39:48.302093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.701 [2024-11-18 10:39:48.304090] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.701 [2024-11-18 10:39:48.304132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.701 [2024-11-18 10:39:48.304143] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.701 [2024-11-18 10:39:48.304153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.701 [2024-11-18 10:39:48.304159] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.701 [2024-11-18 10:39:48.304191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:22.701 10:39:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.701 "name": 
"Existed_Raid", 00:11:22.701 "uuid": "46ad7412-1883-441a-a830-b82d1dfb573e", 00:11:22.701 "strip_size_kb": 0, 00:11:22.701 "state": "configuring", 00:11:22.701 "raid_level": "raid1", 00:11:22.701 "superblock": true, 00:11:22.701 "num_base_bdevs": 4, 00:11:22.701 "num_base_bdevs_discovered": 1, 00:11:22.701 "num_base_bdevs_operational": 4, 00:11:22.701 "base_bdevs_list": [ 00:11:22.701 { 00:11:22.701 "name": "BaseBdev1", 00:11:22.701 "uuid": "42d2ce5a-3e87-4bfe-adff-43af16723a14", 00:11:22.701 "is_configured": true, 00:11:22.701 "data_offset": 2048, 00:11:22.701 "data_size": 63488 00:11:22.701 }, 00:11:22.701 { 00:11:22.701 "name": "BaseBdev2", 00:11:22.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.701 "is_configured": false, 00:11:22.701 "data_offset": 0, 00:11:22.701 "data_size": 0 00:11:22.701 }, 00:11:22.701 { 00:11:22.701 "name": "BaseBdev3", 00:11:22.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.701 "is_configured": false, 00:11:22.701 "data_offset": 0, 00:11:22.701 "data_size": 0 00:11:22.701 }, 00:11:22.701 { 00:11:22.701 "name": "BaseBdev4", 00:11:22.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.701 "is_configured": false, 00:11:22.701 "data_offset": 0, 00:11:22.701 "data_size": 0 00:11:22.701 } 00:11:22.701 ] 00:11:22.701 }' 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.701 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.960 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.960 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.960 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.961 [2024-11-18 10:39:48.792691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.961 
BaseBdev2 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.961 [ 00:11:22.961 { 00:11:22.961 "name": "BaseBdev2", 00:11:22.961 "aliases": [ 00:11:22.961 "578807d9-4a55-49b4-b439-937e2ff7ab48" 00:11:22.961 ], 00:11:22.961 "product_name": "Malloc disk", 00:11:22.961 "block_size": 512, 00:11:22.961 "num_blocks": 65536, 00:11:22.961 "uuid": "578807d9-4a55-49b4-b439-937e2ff7ab48", 00:11:22.961 "assigned_rate_limits": { 
00:11:22.961 "rw_ios_per_sec": 0, 00:11:22.961 "rw_mbytes_per_sec": 0, 00:11:22.961 "r_mbytes_per_sec": 0, 00:11:22.961 "w_mbytes_per_sec": 0 00:11:22.961 }, 00:11:22.961 "claimed": true, 00:11:22.961 "claim_type": "exclusive_write", 00:11:22.961 "zoned": false, 00:11:22.961 "supported_io_types": { 00:11:22.961 "read": true, 00:11:22.961 "write": true, 00:11:22.961 "unmap": true, 00:11:22.961 "flush": true, 00:11:22.961 "reset": true, 00:11:22.961 "nvme_admin": false, 00:11:22.961 "nvme_io": false, 00:11:22.961 "nvme_io_md": false, 00:11:22.961 "write_zeroes": true, 00:11:22.961 "zcopy": true, 00:11:22.961 "get_zone_info": false, 00:11:22.961 "zone_management": false, 00:11:22.961 "zone_append": false, 00:11:22.961 "compare": false, 00:11:22.961 "compare_and_write": false, 00:11:22.961 "abort": true, 00:11:22.961 "seek_hole": false, 00:11:22.961 "seek_data": false, 00:11:22.961 "copy": true, 00:11:22.961 "nvme_iov_md": false 00:11:22.961 }, 00:11:22.961 "memory_domains": [ 00:11:22.961 { 00:11:22.961 "dma_device_id": "system", 00:11:22.961 "dma_device_type": 1 00:11:22.961 }, 00:11:22.961 { 00:11:22.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.961 "dma_device_type": 2 00:11:22.961 } 00:11:22.961 ], 00:11:22.961 "driver_specific": {} 00:11:22.961 } 00:11:22.961 ] 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.961 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.219 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.219 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.219 "name": "Existed_Raid", 00:11:23.219 "uuid": "46ad7412-1883-441a-a830-b82d1dfb573e", 00:11:23.219 "strip_size_kb": 0, 00:11:23.219 "state": "configuring", 00:11:23.219 "raid_level": "raid1", 00:11:23.219 "superblock": true, 00:11:23.219 "num_base_bdevs": 4, 00:11:23.219 "num_base_bdevs_discovered": 2, 00:11:23.219 "num_base_bdevs_operational": 4, 00:11:23.219 
"base_bdevs_list": [ 00:11:23.219 { 00:11:23.219 "name": "BaseBdev1", 00:11:23.219 "uuid": "42d2ce5a-3e87-4bfe-adff-43af16723a14", 00:11:23.219 "is_configured": true, 00:11:23.219 "data_offset": 2048, 00:11:23.219 "data_size": 63488 00:11:23.219 }, 00:11:23.219 { 00:11:23.219 "name": "BaseBdev2", 00:11:23.219 "uuid": "578807d9-4a55-49b4-b439-937e2ff7ab48", 00:11:23.219 "is_configured": true, 00:11:23.219 "data_offset": 2048, 00:11:23.219 "data_size": 63488 00:11:23.219 }, 00:11:23.219 { 00:11:23.219 "name": "BaseBdev3", 00:11:23.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.219 "is_configured": false, 00:11:23.219 "data_offset": 0, 00:11:23.219 "data_size": 0 00:11:23.219 }, 00:11:23.219 { 00:11:23.219 "name": "BaseBdev4", 00:11:23.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.219 "is_configured": false, 00:11:23.219 "data_offset": 0, 00:11:23.219 "data_size": 0 00:11:23.219 } 00:11:23.219 ] 00:11:23.219 }' 00:11:23.219 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.219 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.479 [2024-11-18 10:39:49.340245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.479 BaseBdev3 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.479 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.774 [ 00:11:23.774 { 00:11:23.774 "name": "BaseBdev3", 00:11:23.774 "aliases": [ 00:11:23.774 "df9fac2f-16fd-4424-a3e5-a836cb346236" 00:11:23.774 ], 00:11:23.774 "product_name": "Malloc disk", 00:11:23.774 "block_size": 512, 00:11:23.774 "num_blocks": 65536, 00:11:23.774 "uuid": "df9fac2f-16fd-4424-a3e5-a836cb346236", 00:11:23.774 "assigned_rate_limits": { 00:11:23.774 "rw_ios_per_sec": 0, 00:11:23.774 "rw_mbytes_per_sec": 0, 00:11:23.774 "r_mbytes_per_sec": 0, 00:11:23.774 "w_mbytes_per_sec": 0 00:11:23.774 }, 00:11:23.774 "claimed": true, 00:11:23.774 "claim_type": "exclusive_write", 00:11:23.774 "zoned": false, 00:11:23.774 "supported_io_types": { 00:11:23.775 "read": true, 00:11:23.775 
"write": true, 00:11:23.775 "unmap": true, 00:11:23.775 "flush": true, 00:11:23.775 "reset": true, 00:11:23.775 "nvme_admin": false, 00:11:23.775 "nvme_io": false, 00:11:23.775 "nvme_io_md": false, 00:11:23.775 "write_zeroes": true, 00:11:23.775 "zcopy": true, 00:11:23.775 "get_zone_info": false, 00:11:23.775 "zone_management": false, 00:11:23.775 "zone_append": false, 00:11:23.775 "compare": false, 00:11:23.775 "compare_and_write": false, 00:11:23.775 "abort": true, 00:11:23.775 "seek_hole": false, 00:11:23.775 "seek_data": false, 00:11:23.775 "copy": true, 00:11:23.775 "nvme_iov_md": false 00:11:23.775 }, 00:11:23.775 "memory_domains": [ 00:11:23.775 { 00:11:23.775 "dma_device_id": "system", 00:11:23.775 "dma_device_type": 1 00:11:23.775 }, 00:11:23.775 { 00:11:23.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.775 "dma_device_type": 2 00:11:23.775 } 00:11:23.775 ], 00:11:23.775 "driver_specific": {} 00:11:23.775 } 00:11:23.775 ] 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.775 "name": "Existed_Raid", 00:11:23.775 "uuid": "46ad7412-1883-441a-a830-b82d1dfb573e", 00:11:23.775 "strip_size_kb": 0, 00:11:23.775 "state": "configuring", 00:11:23.775 "raid_level": "raid1", 00:11:23.775 "superblock": true, 00:11:23.775 "num_base_bdevs": 4, 00:11:23.775 "num_base_bdevs_discovered": 3, 00:11:23.775 "num_base_bdevs_operational": 4, 00:11:23.775 "base_bdevs_list": [ 00:11:23.775 { 00:11:23.775 "name": "BaseBdev1", 00:11:23.775 "uuid": "42d2ce5a-3e87-4bfe-adff-43af16723a14", 00:11:23.775 "is_configured": true, 00:11:23.775 "data_offset": 2048, 00:11:23.775 "data_size": 63488 00:11:23.775 }, 00:11:23.775 { 00:11:23.775 "name": "BaseBdev2", 00:11:23.775 "uuid": 
"578807d9-4a55-49b4-b439-937e2ff7ab48", 00:11:23.775 "is_configured": true, 00:11:23.775 "data_offset": 2048, 00:11:23.775 "data_size": 63488 00:11:23.775 }, 00:11:23.775 { 00:11:23.775 "name": "BaseBdev3", 00:11:23.775 "uuid": "df9fac2f-16fd-4424-a3e5-a836cb346236", 00:11:23.775 "is_configured": true, 00:11:23.775 "data_offset": 2048, 00:11:23.775 "data_size": 63488 00:11:23.775 }, 00:11:23.775 { 00:11:23.775 "name": "BaseBdev4", 00:11:23.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.775 "is_configured": false, 00:11:23.775 "data_offset": 0, 00:11:23.775 "data_size": 0 00:11:23.775 } 00:11:23.775 ] 00:11:23.775 }' 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.775 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.036 [2024-11-18 10:39:49.845338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.036 [2024-11-18 10:39:49.845612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.036 [2024-11-18 10:39:49.845626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.036 [2024-11-18 10:39:49.845914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.036 BaseBdev4 00:11:24.036 [2024-11-18 10:39:49.846078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.036 [2024-11-18 10:39:49.846092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:24.036 [2024-11-18 10:39:49.846251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.036 [ 00:11:24.036 { 00:11:24.036 "name": "BaseBdev4", 00:11:24.036 "aliases": [ 00:11:24.036 "3d3a11e3-f9a7-4c0a-ba22-a51dbe6dfd1d" 00:11:24.036 ], 00:11:24.036 "product_name": "Malloc disk", 00:11:24.036 "block_size": 512, 00:11:24.036 
"num_blocks": 65536, 00:11:24.036 "uuid": "3d3a11e3-f9a7-4c0a-ba22-a51dbe6dfd1d", 00:11:24.036 "assigned_rate_limits": { 00:11:24.036 "rw_ios_per_sec": 0, 00:11:24.036 "rw_mbytes_per_sec": 0, 00:11:24.036 "r_mbytes_per_sec": 0, 00:11:24.036 "w_mbytes_per_sec": 0 00:11:24.036 }, 00:11:24.036 "claimed": true, 00:11:24.036 "claim_type": "exclusive_write", 00:11:24.036 "zoned": false, 00:11:24.036 "supported_io_types": { 00:11:24.036 "read": true, 00:11:24.036 "write": true, 00:11:24.036 "unmap": true, 00:11:24.036 "flush": true, 00:11:24.036 "reset": true, 00:11:24.036 "nvme_admin": false, 00:11:24.036 "nvme_io": false, 00:11:24.036 "nvme_io_md": false, 00:11:24.036 "write_zeroes": true, 00:11:24.036 "zcopy": true, 00:11:24.036 "get_zone_info": false, 00:11:24.036 "zone_management": false, 00:11:24.036 "zone_append": false, 00:11:24.036 "compare": false, 00:11:24.036 "compare_and_write": false, 00:11:24.036 "abort": true, 00:11:24.036 "seek_hole": false, 00:11:24.036 "seek_data": false, 00:11:24.036 "copy": true, 00:11:24.036 "nvme_iov_md": false 00:11:24.036 }, 00:11:24.036 "memory_domains": [ 00:11:24.036 { 00:11:24.036 "dma_device_id": "system", 00:11:24.036 "dma_device_type": 1 00:11:24.036 }, 00:11:24.036 { 00:11:24.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.036 "dma_device_type": 2 00:11:24.036 } 00:11:24.036 ], 00:11:24.036 "driver_specific": {} 00:11:24.036 } 00:11:24.036 ] 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.036 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.295 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.295 "name": "Existed_Raid", 00:11:24.295 "uuid": "46ad7412-1883-441a-a830-b82d1dfb573e", 00:11:24.295 "strip_size_kb": 0, 00:11:24.295 "state": "online", 00:11:24.295 "raid_level": "raid1", 00:11:24.295 "superblock": true, 00:11:24.295 "num_base_bdevs": 4, 
00:11:24.295 "num_base_bdevs_discovered": 4, 00:11:24.295 "num_base_bdevs_operational": 4, 00:11:24.295 "base_bdevs_list": [ 00:11:24.295 { 00:11:24.295 "name": "BaseBdev1", 00:11:24.295 "uuid": "42d2ce5a-3e87-4bfe-adff-43af16723a14", 00:11:24.295 "is_configured": true, 00:11:24.295 "data_offset": 2048, 00:11:24.295 "data_size": 63488 00:11:24.295 }, 00:11:24.295 { 00:11:24.295 "name": "BaseBdev2", 00:11:24.295 "uuid": "578807d9-4a55-49b4-b439-937e2ff7ab48", 00:11:24.295 "is_configured": true, 00:11:24.295 "data_offset": 2048, 00:11:24.295 "data_size": 63488 00:11:24.295 }, 00:11:24.295 { 00:11:24.295 "name": "BaseBdev3", 00:11:24.295 "uuid": "df9fac2f-16fd-4424-a3e5-a836cb346236", 00:11:24.295 "is_configured": true, 00:11:24.295 "data_offset": 2048, 00:11:24.295 "data_size": 63488 00:11:24.295 }, 00:11:24.295 { 00:11:24.295 "name": "BaseBdev4", 00:11:24.295 "uuid": "3d3a11e3-f9a7-4c0a-ba22-a51dbe6dfd1d", 00:11:24.295 "is_configured": true, 00:11:24.295 "data_offset": 2048, 00:11:24.295 "data_size": 63488 00:11:24.295 } 00:11:24.295 ] 00:11:24.295 }' 00:11:24.295 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.295 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.555 
10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.555 [2024-11-18 10:39:50.348776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.555 "name": "Existed_Raid", 00:11:24.555 "aliases": [ 00:11:24.555 "46ad7412-1883-441a-a830-b82d1dfb573e" 00:11:24.555 ], 00:11:24.555 "product_name": "Raid Volume", 00:11:24.555 "block_size": 512, 00:11:24.555 "num_blocks": 63488, 00:11:24.555 "uuid": "46ad7412-1883-441a-a830-b82d1dfb573e", 00:11:24.555 "assigned_rate_limits": { 00:11:24.555 "rw_ios_per_sec": 0, 00:11:24.555 "rw_mbytes_per_sec": 0, 00:11:24.555 "r_mbytes_per_sec": 0, 00:11:24.555 "w_mbytes_per_sec": 0 00:11:24.555 }, 00:11:24.555 "claimed": false, 00:11:24.555 "zoned": false, 00:11:24.555 "supported_io_types": { 00:11:24.555 "read": true, 00:11:24.555 "write": true, 00:11:24.555 "unmap": false, 00:11:24.555 "flush": false, 00:11:24.555 "reset": true, 00:11:24.555 "nvme_admin": false, 00:11:24.555 "nvme_io": false, 00:11:24.555 "nvme_io_md": false, 00:11:24.555 "write_zeroes": true, 00:11:24.555 "zcopy": false, 00:11:24.555 "get_zone_info": false, 00:11:24.555 "zone_management": false, 00:11:24.555 "zone_append": false, 00:11:24.555 "compare": false, 00:11:24.555 "compare_and_write": false, 00:11:24.555 "abort": false, 00:11:24.555 "seek_hole": false, 00:11:24.555 "seek_data": false, 00:11:24.555 "copy": false, 00:11:24.555 
"nvme_iov_md": false 00:11:24.555 }, 00:11:24.555 "memory_domains": [ 00:11:24.555 { 00:11:24.555 "dma_device_id": "system", 00:11:24.555 "dma_device_type": 1 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.555 "dma_device_type": 2 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "dma_device_id": "system", 00:11:24.555 "dma_device_type": 1 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.555 "dma_device_type": 2 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "dma_device_id": "system", 00:11:24.555 "dma_device_type": 1 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.555 "dma_device_type": 2 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "dma_device_id": "system", 00:11:24.555 "dma_device_type": 1 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.555 "dma_device_type": 2 00:11:24.555 } 00:11:24.555 ], 00:11:24.555 "driver_specific": { 00:11:24.555 "raid": { 00:11:24.555 "uuid": "46ad7412-1883-441a-a830-b82d1dfb573e", 00:11:24.555 "strip_size_kb": 0, 00:11:24.555 "state": "online", 00:11:24.555 "raid_level": "raid1", 00:11:24.555 "superblock": true, 00:11:24.555 "num_base_bdevs": 4, 00:11:24.555 "num_base_bdevs_discovered": 4, 00:11:24.555 "num_base_bdevs_operational": 4, 00:11:24.555 "base_bdevs_list": [ 00:11:24.555 { 00:11:24.555 "name": "BaseBdev1", 00:11:24.555 "uuid": "42d2ce5a-3e87-4bfe-adff-43af16723a14", 00:11:24.555 "is_configured": true, 00:11:24.555 "data_offset": 2048, 00:11:24.555 "data_size": 63488 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "name": "BaseBdev2", 00:11:24.555 "uuid": "578807d9-4a55-49b4-b439-937e2ff7ab48", 00:11:24.555 "is_configured": true, 00:11:24.555 "data_offset": 2048, 00:11:24.555 "data_size": 63488 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "name": "BaseBdev3", 00:11:24.555 "uuid": "df9fac2f-16fd-4424-a3e5-a836cb346236", 00:11:24.555 "is_configured": true, 
00:11:24.555 "data_offset": 2048, 00:11:24.555 "data_size": 63488 00:11:24.555 }, 00:11:24.555 { 00:11:24.555 "name": "BaseBdev4", 00:11:24.555 "uuid": "3d3a11e3-f9a7-4c0a-ba22-a51dbe6dfd1d", 00:11:24.555 "is_configured": true, 00:11:24.555 "data_offset": 2048, 00:11:24.555 "data_size": 63488 00:11:24.555 } 00:11:24.555 ] 00:11:24.555 } 00:11:24.555 } 00:11:24.555 }' 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:24.555 BaseBdev2 00:11:24.555 BaseBdev3 00:11:24.555 BaseBdev4' 00:11:24.555 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.815 10:39:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.815 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.815 [2024-11-18 10:39:50.652047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:25.075 10:39:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.075 "name": "Existed_Raid", 00:11:25.075 "uuid": "46ad7412-1883-441a-a830-b82d1dfb573e", 00:11:25.075 "strip_size_kb": 0, 00:11:25.075 
"state": "online", 00:11:25.075 "raid_level": "raid1", 00:11:25.075 "superblock": true, 00:11:25.075 "num_base_bdevs": 4, 00:11:25.075 "num_base_bdevs_discovered": 3, 00:11:25.075 "num_base_bdevs_operational": 3, 00:11:25.075 "base_bdevs_list": [ 00:11:25.075 { 00:11:25.075 "name": null, 00:11:25.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.075 "is_configured": false, 00:11:25.075 "data_offset": 0, 00:11:25.075 "data_size": 63488 00:11:25.075 }, 00:11:25.075 { 00:11:25.075 "name": "BaseBdev2", 00:11:25.075 "uuid": "578807d9-4a55-49b4-b439-937e2ff7ab48", 00:11:25.075 "is_configured": true, 00:11:25.075 "data_offset": 2048, 00:11:25.075 "data_size": 63488 00:11:25.075 }, 00:11:25.075 { 00:11:25.075 "name": "BaseBdev3", 00:11:25.075 "uuid": "df9fac2f-16fd-4424-a3e5-a836cb346236", 00:11:25.075 "is_configured": true, 00:11:25.075 "data_offset": 2048, 00:11:25.075 "data_size": 63488 00:11:25.075 }, 00:11:25.075 { 00:11:25.075 "name": "BaseBdev4", 00:11:25.075 "uuid": "3d3a11e3-f9a7-4c0a-ba22-a51dbe6dfd1d", 00:11:25.075 "is_configured": true, 00:11:25.075 "data_offset": 2048, 00:11:25.075 "data_size": 63488 00:11:25.075 } 00:11:25.075 ] 00:11:25.075 }' 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.075 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.335 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.335 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.335 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.335 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.335 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.335 10:39:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.335 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.598 [2024-11-18 10:39:51.235393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.598 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.599 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.599 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.599 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.599 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.599 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:25.599 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:25.599 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.599 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.599 [2024-11-18 10:39:51.396628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.864 [2024-11-18 10:39:51.556921] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:25.864 [2024-11-18 10:39:51.557119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.864 [2024-11-18 10:39:51.658083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.864 [2024-11-18 10:39:51.658139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.864 [2024-11-18 10:39:51.658152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.864 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.124 BaseBdev2 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.124 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:26.125 [ 00:11:26.125 { 00:11:26.125 "name": "BaseBdev2", 00:11:26.125 "aliases": [ 00:11:26.125 "adcd563c-86c6-4e76-8a2f-15baf16a52fb" 00:11:26.125 ], 00:11:26.125 "product_name": "Malloc disk", 00:11:26.125 "block_size": 512, 00:11:26.125 "num_blocks": 65536, 00:11:26.125 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:26.125 "assigned_rate_limits": { 00:11:26.125 "rw_ios_per_sec": 0, 00:11:26.125 "rw_mbytes_per_sec": 0, 00:11:26.125 "r_mbytes_per_sec": 0, 00:11:26.125 "w_mbytes_per_sec": 0 00:11:26.125 }, 00:11:26.125 "claimed": false, 00:11:26.125 "zoned": false, 00:11:26.125 "supported_io_types": { 00:11:26.125 "read": true, 00:11:26.125 "write": true, 00:11:26.125 "unmap": true, 00:11:26.125 "flush": true, 00:11:26.125 "reset": true, 00:11:26.125 "nvme_admin": false, 00:11:26.125 "nvme_io": false, 00:11:26.125 "nvme_io_md": false, 00:11:26.125 "write_zeroes": true, 00:11:26.125 "zcopy": true, 00:11:26.125 "get_zone_info": false, 00:11:26.125 "zone_management": false, 00:11:26.125 "zone_append": false, 00:11:26.125 "compare": false, 00:11:26.125 "compare_and_write": false, 00:11:26.125 "abort": true, 00:11:26.125 "seek_hole": false, 00:11:26.125 "seek_data": false, 00:11:26.125 "copy": true, 00:11:26.125 "nvme_iov_md": false 00:11:26.125 }, 00:11:26.125 "memory_domains": [ 00:11:26.125 { 00:11:26.125 "dma_device_id": "system", 00:11:26.125 "dma_device_type": 1 00:11:26.125 }, 00:11:26.125 { 00:11:26.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.125 "dma_device_type": 2 00:11:26.125 } 00:11:26.125 ], 00:11:26.125 "driver_specific": {} 00:11:26.125 } 00:11:26.125 ] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.125 10:39:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.125 BaseBdev3 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.125 10:39:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.125 [ 00:11:26.125 { 00:11:26.125 "name": "BaseBdev3", 00:11:26.125 "aliases": [ 00:11:26.125 "b0a3cf9f-a049-4370-820b-e3e52f5223e6" 00:11:26.125 ], 00:11:26.125 "product_name": "Malloc disk", 00:11:26.125 "block_size": 512, 00:11:26.125 "num_blocks": 65536, 00:11:26.125 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:26.125 "assigned_rate_limits": { 00:11:26.125 "rw_ios_per_sec": 0, 00:11:26.125 "rw_mbytes_per_sec": 0, 00:11:26.125 "r_mbytes_per_sec": 0, 00:11:26.125 "w_mbytes_per_sec": 0 00:11:26.125 }, 00:11:26.125 "claimed": false, 00:11:26.125 "zoned": false, 00:11:26.125 "supported_io_types": { 00:11:26.125 "read": true, 00:11:26.125 "write": true, 00:11:26.125 "unmap": true, 00:11:26.125 "flush": true, 00:11:26.125 "reset": true, 00:11:26.125 "nvme_admin": false, 00:11:26.125 "nvme_io": false, 00:11:26.125 "nvme_io_md": false, 00:11:26.125 "write_zeroes": true, 00:11:26.125 "zcopy": true, 00:11:26.125 "get_zone_info": false, 00:11:26.125 "zone_management": false, 00:11:26.125 "zone_append": false, 00:11:26.125 "compare": false, 00:11:26.125 "compare_and_write": false, 00:11:26.125 "abort": true, 00:11:26.125 "seek_hole": false, 00:11:26.125 "seek_data": false, 00:11:26.125 "copy": true, 00:11:26.125 "nvme_iov_md": false 00:11:26.125 }, 00:11:26.125 "memory_domains": [ 00:11:26.125 { 00:11:26.125 "dma_device_id": "system", 00:11:26.125 "dma_device_type": 1 00:11:26.125 }, 00:11:26.125 { 00:11:26.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.125 "dma_device_type": 2 00:11:26.125 } 00:11:26.125 ], 00:11:26.125 "driver_specific": {} 00:11:26.125 } 00:11:26.125 ] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.125 BaseBdev4 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.125 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.125 [ 00:11:26.125 { 00:11:26.125 "name": "BaseBdev4", 00:11:26.125 "aliases": [ 00:11:26.125 "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2" 00:11:26.125 ], 00:11:26.125 "product_name": "Malloc disk", 00:11:26.125 "block_size": 512, 00:11:26.125 "num_blocks": 65536, 00:11:26.125 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:26.125 "assigned_rate_limits": { 00:11:26.125 "rw_ios_per_sec": 0, 00:11:26.125 "rw_mbytes_per_sec": 0, 00:11:26.125 "r_mbytes_per_sec": 0, 00:11:26.125 "w_mbytes_per_sec": 0 00:11:26.125 }, 00:11:26.125 "claimed": false, 00:11:26.125 "zoned": false, 00:11:26.125 "supported_io_types": { 00:11:26.125 "read": true, 00:11:26.125 "write": true, 00:11:26.125 "unmap": true, 00:11:26.125 "flush": true, 00:11:26.125 "reset": true, 00:11:26.125 "nvme_admin": false, 00:11:26.125 "nvme_io": false, 00:11:26.125 "nvme_io_md": false, 00:11:26.125 "write_zeroes": true, 00:11:26.125 "zcopy": true, 00:11:26.125 "get_zone_info": false, 00:11:26.125 "zone_management": false, 00:11:26.125 "zone_append": false, 00:11:26.125 "compare": false, 00:11:26.125 "compare_and_write": false, 00:11:26.125 "abort": true, 00:11:26.125 "seek_hole": false, 00:11:26.125 "seek_data": false, 00:11:26.125 "copy": true, 00:11:26.125 "nvme_iov_md": false 00:11:26.125 }, 00:11:26.125 "memory_domains": [ 00:11:26.125 { 00:11:26.125 "dma_device_id": "system", 00:11:26.125 "dma_device_type": 1 00:11:26.126 }, 00:11:26.126 { 00:11:26.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.126 "dma_device_type": 2 00:11:26.126 } 00:11:26.126 ], 00:11:26.126 "driver_specific": {} 00:11:26.126 } 00:11:26.126 ] 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.126 [2024-11-18 10:39:51.967318] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.126 [2024-11-18 10:39:51.967442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.126 [2024-11-18 10:39:51.967482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.126 [2024-11-18 10:39:51.969448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.126 [2024-11-18 10:39:51.969490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.126 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.385 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.385 "name": "Existed_Raid", 00:11:26.385 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:26.385 "strip_size_kb": 0, 00:11:26.385 "state": "configuring", 00:11:26.385 "raid_level": "raid1", 00:11:26.385 "superblock": true, 00:11:26.385 "num_base_bdevs": 4, 00:11:26.385 "num_base_bdevs_discovered": 3, 00:11:26.385 "num_base_bdevs_operational": 4, 00:11:26.385 "base_bdevs_list": [ 00:11:26.385 { 00:11:26.385 "name": "BaseBdev1", 00:11:26.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.385 "is_configured": false, 00:11:26.385 "data_offset": 0, 00:11:26.385 "data_size": 0 00:11:26.385 }, 00:11:26.385 { 00:11:26.385 "name": "BaseBdev2", 00:11:26.385 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 
00:11:26.385 "is_configured": true, 00:11:26.385 "data_offset": 2048, 00:11:26.385 "data_size": 63488 00:11:26.386 }, 00:11:26.386 { 00:11:26.386 "name": "BaseBdev3", 00:11:26.386 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:26.386 "is_configured": true, 00:11:26.386 "data_offset": 2048, 00:11:26.386 "data_size": 63488 00:11:26.386 }, 00:11:26.386 { 00:11:26.386 "name": "BaseBdev4", 00:11:26.386 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:26.386 "is_configured": true, 00:11:26.386 "data_offset": 2048, 00:11:26.386 "data_size": 63488 00:11:26.386 } 00:11:26.386 ] 00:11:26.386 }' 00:11:26.386 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.386 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.645 [2024-11-18 10:39:52.410641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.645 "name": "Existed_Raid", 00:11:26.645 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:26.645 "strip_size_kb": 0, 00:11:26.645 "state": "configuring", 00:11:26.645 "raid_level": "raid1", 00:11:26.645 "superblock": true, 00:11:26.645 "num_base_bdevs": 4, 00:11:26.645 "num_base_bdevs_discovered": 2, 00:11:26.645 "num_base_bdevs_operational": 4, 00:11:26.645 "base_bdevs_list": [ 00:11:26.645 { 00:11:26.645 "name": "BaseBdev1", 00:11:26.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.645 "is_configured": false, 00:11:26.645 "data_offset": 0, 00:11:26.645 "data_size": 0 00:11:26.645 }, 00:11:26.645 { 00:11:26.645 "name": null, 00:11:26.645 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:26.645 
"is_configured": false, 00:11:26.645 "data_offset": 0, 00:11:26.645 "data_size": 63488 00:11:26.645 }, 00:11:26.645 { 00:11:26.645 "name": "BaseBdev3", 00:11:26.645 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:26.645 "is_configured": true, 00:11:26.645 "data_offset": 2048, 00:11:26.645 "data_size": 63488 00:11:26.645 }, 00:11:26.645 { 00:11:26.645 "name": "BaseBdev4", 00:11:26.645 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:26.645 "is_configured": true, 00:11:26.645 "data_offset": 2048, 00:11:26.645 "data_size": 63488 00:11:26.645 } 00:11:26.645 ] 00:11:26.645 }' 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.645 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.213 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.213 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.214 [2024-11-18 10:39:52.914746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.214 BaseBdev1 
00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.214 [ 00:11:27.214 { 00:11:27.214 "name": "BaseBdev1", 00:11:27.214 "aliases": [ 00:11:27.214 "7b1ead48-cad2-47f6-8557-d3c35ac3c929" 00:11:27.214 ], 00:11:27.214 "product_name": "Malloc disk", 00:11:27.214 "block_size": 512, 00:11:27.214 "num_blocks": 65536, 00:11:27.214 "uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:27.214 "assigned_rate_limits": { 00:11:27.214 
"rw_ios_per_sec": 0, 00:11:27.214 "rw_mbytes_per_sec": 0, 00:11:27.214 "r_mbytes_per_sec": 0, 00:11:27.214 "w_mbytes_per_sec": 0 00:11:27.214 }, 00:11:27.214 "claimed": true, 00:11:27.214 "claim_type": "exclusive_write", 00:11:27.214 "zoned": false, 00:11:27.214 "supported_io_types": { 00:11:27.214 "read": true, 00:11:27.214 "write": true, 00:11:27.214 "unmap": true, 00:11:27.214 "flush": true, 00:11:27.214 "reset": true, 00:11:27.214 "nvme_admin": false, 00:11:27.214 "nvme_io": false, 00:11:27.214 "nvme_io_md": false, 00:11:27.214 "write_zeroes": true, 00:11:27.214 "zcopy": true, 00:11:27.214 "get_zone_info": false, 00:11:27.214 "zone_management": false, 00:11:27.214 "zone_append": false, 00:11:27.214 "compare": false, 00:11:27.214 "compare_and_write": false, 00:11:27.214 "abort": true, 00:11:27.214 "seek_hole": false, 00:11:27.214 "seek_data": false, 00:11:27.214 "copy": true, 00:11:27.214 "nvme_iov_md": false 00:11:27.214 }, 00:11:27.214 "memory_domains": [ 00:11:27.214 { 00:11:27.214 "dma_device_id": "system", 00:11:27.214 "dma_device_type": 1 00:11:27.214 }, 00:11:27.214 { 00:11:27.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.214 "dma_device_type": 2 00:11:27.214 } 00:11:27.214 ], 00:11:27.214 "driver_specific": {} 00:11:27.214 } 00:11:27.214 ] 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.214 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.214 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.214 "name": "Existed_Raid", 00:11:27.214 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:27.214 "strip_size_kb": 0, 00:11:27.214 "state": "configuring", 00:11:27.214 "raid_level": "raid1", 00:11:27.214 "superblock": true, 00:11:27.214 "num_base_bdevs": 4, 00:11:27.214 "num_base_bdevs_discovered": 3, 00:11:27.214 "num_base_bdevs_operational": 4, 00:11:27.214 "base_bdevs_list": [ 00:11:27.214 { 00:11:27.214 "name": "BaseBdev1", 00:11:27.214 "uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:27.214 "is_configured": true, 00:11:27.214 "data_offset": 2048, 00:11:27.214 "data_size": 63488 
00:11:27.214 }, 00:11:27.214 { 00:11:27.214 "name": null, 00:11:27.214 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:27.214 "is_configured": false, 00:11:27.214 "data_offset": 0, 00:11:27.214 "data_size": 63488 00:11:27.214 }, 00:11:27.214 { 00:11:27.214 "name": "BaseBdev3", 00:11:27.214 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:27.214 "is_configured": true, 00:11:27.214 "data_offset": 2048, 00:11:27.214 "data_size": 63488 00:11:27.214 }, 00:11:27.214 { 00:11:27.214 "name": "BaseBdev4", 00:11:27.214 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:27.214 "is_configured": true, 00:11:27.214 "data_offset": 2048, 00:11:27.214 "data_size": 63488 00:11:27.214 } 00:11:27.214 ] 00:11:27.214 }' 00:11:27.214 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.214 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.784 
[2024-11-18 10:39:53.402075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.784 10:39:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.784 "name": "Existed_Raid", 00:11:27.784 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:27.784 "strip_size_kb": 0, 00:11:27.784 "state": "configuring", 00:11:27.784 "raid_level": "raid1", 00:11:27.784 "superblock": true, 00:11:27.784 "num_base_bdevs": 4, 00:11:27.784 "num_base_bdevs_discovered": 2, 00:11:27.784 "num_base_bdevs_operational": 4, 00:11:27.784 "base_bdevs_list": [ 00:11:27.784 { 00:11:27.784 "name": "BaseBdev1", 00:11:27.784 "uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:27.784 "is_configured": true, 00:11:27.784 "data_offset": 2048, 00:11:27.784 "data_size": 63488 00:11:27.784 }, 00:11:27.784 { 00:11:27.784 "name": null, 00:11:27.784 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:27.784 "is_configured": false, 00:11:27.784 "data_offset": 0, 00:11:27.784 "data_size": 63488 00:11:27.784 }, 00:11:27.784 { 00:11:27.784 "name": null, 00:11:27.784 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:27.784 "is_configured": false, 00:11:27.784 "data_offset": 0, 00:11:27.784 "data_size": 63488 00:11:27.784 }, 00:11:27.784 { 00:11:27.784 "name": "BaseBdev4", 00:11:27.784 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:27.784 "is_configured": true, 00:11:27.784 "data_offset": 2048, 00:11:27.784 "data_size": 63488 00:11:27.784 } 00:11:27.784 ] 00:11:27.784 }' 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.784 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.044 
10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.044 [2024-11-18 10:39:53.845322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.044 "name": "Existed_Raid", 00:11:28.044 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:28.044 "strip_size_kb": 0, 00:11:28.044 "state": "configuring", 00:11:28.044 "raid_level": "raid1", 00:11:28.044 "superblock": true, 00:11:28.044 "num_base_bdevs": 4, 00:11:28.044 "num_base_bdevs_discovered": 3, 00:11:28.044 "num_base_bdevs_operational": 4, 00:11:28.044 "base_bdevs_list": [ 00:11:28.044 { 00:11:28.044 "name": "BaseBdev1", 00:11:28.044 "uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 2048, 00:11:28.044 "data_size": 63488 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": null, 00:11:28.044 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:28.044 "is_configured": false, 00:11:28.044 "data_offset": 0, 00:11:28.044 "data_size": 63488 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": "BaseBdev3", 00:11:28.044 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 2048, 00:11:28.044 "data_size": 63488 00:11:28.044 }, 00:11:28.044 { 00:11:28.044 "name": "BaseBdev4", 00:11:28.044 "uuid": 
"b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:28.044 "is_configured": true, 00:11:28.044 "data_offset": 2048, 00:11:28.044 "data_size": 63488 00:11:28.044 } 00:11:28.044 ] 00:11:28.044 }' 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.044 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.614 [2024-11-18 10:39:54.280560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.614 "name": "Existed_Raid", 00:11:28.614 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:28.614 "strip_size_kb": 0, 00:11:28.614 "state": "configuring", 00:11:28.614 "raid_level": "raid1", 00:11:28.614 "superblock": true, 00:11:28.614 "num_base_bdevs": 4, 00:11:28.614 "num_base_bdevs_discovered": 2, 00:11:28.614 "num_base_bdevs_operational": 4, 00:11:28.614 "base_bdevs_list": [ 00:11:28.614 { 00:11:28.614 "name": null, 00:11:28.614 
"uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:28.614 "is_configured": false, 00:11:28.614 "data_offset": 0, 00:11:28.614 "data_size": 63488 00:11:28.614 }, 00:11:28.614 { 00:11:28.614 "name": null, 00:11:28.614 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:28.614 "is_configured": false, 00:11:28.614 "data_offset": 0, 00:11:28.614 "data_size": 63488 00:11:28.614 }, 00:11:28.614 { 00:11:28.614 "name": "BaseBdev3", 00:11:28.614 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:28.614 "is_configured": true, 00:11:28.614 "data_offset": 2048, 00:11:28.614 "data_size": 63488 00:11:28.614 }, 00:11:28.614 { 00:11:28.614 "name": "BaseBdev4", 00:11:28.614 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:28.614 "is_configured": true, 00:11:28.614 "data_offset": 2048, 00:11:28.614 "data_size": 63488 00:11:28.614 } 00:11:28.614 ] 00:11:28.614 }' 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.614 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.184 [2024-11-18 10:39:54.829284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.184 10:39:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.184 "name": "Existed_Raid", 00:11:29.184 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:29.184 "strip_size_kb": 0, 00:11:29.184 "state": "configuring", 00:11:29.184 "raid_level": "raid1", 00:11:29.184 "superblock": true, 00:11:29.184 "num_base_bdevs": 4, 00:11:29.184 "num_base_bdevs_discovered": 3, 00:11:29.184 "num_base_bdevs_operational": 4, 00:11:29.184 "base_bdevs_list": [ 00:11:29.184 { 00:11:29.184 "name": null, 00:11:29.184 "uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:29.184 "is_configured": false, 00:11:29.184 "data_offset": 0, 00:11:29.184 "data_size": 63488 00:11:29.184 }, 00:11:29.184 { 00:11:29.184 "name": "BaseBdev2", 00:11:29.184 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:29.184 "is_configured": true, 00:11:29.184 "data_offset": 2048, 00:11:29.184 "data_size": 63488 00:11:29.184 }, 00:11:29.184 { 00:11:29.184 "name": "BaseBdev3", 00:11:29.184 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:29.184 "is_configured": true, 00:11:29.184 "data_offset": 2048, 00:11:29.184 "data_size": 63488 00:11:29.184 }, 00:11:29.184 { 00:11:29.184 "name": "BaseBdev4", 00:11:29.184 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:29.184 "is_configured": true, 00:11:29.184 "data_offset": 2048, 00:11:29.184 "data_size": 63488 00:11:29.184 } 00:11:29.184 ] 00:11:29.184 }' 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.184 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.444 10:39:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7b1ead48-cad2-47f6-8557-d3c35ac3c929 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.444 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.444 [2024-11-18 10:39:55.325612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:29.444 [2024-11-18 10:39:55.325860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:29.444 [2024-11-18 10:39:55.325879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:29.444 [2024-11-18 10:39:55.326187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:29.444 [2024-11-18 10:39:55.326364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:29.703 NewBaseBdev 00:11:29.703 [2024-11-18 10:39:55.326416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:29.703 [2024-11-18 10:39:55.326591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:29.703 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.703 10:39:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.703 [ 00:11:29.703 { 00:11:29.703 "name": "NewBaseBdev", 00:11:29.703 "aliases": [ 00:11:29.703 "7b1ead48-cad2-47f6-8557-d3c35ac3c929" 00:11:29.703 ], 00:11:29.703 "product_name": "Malloc disk", 00:11:29.703 "block_size": 512, 00:11:29.703 "num_blocks": 65536, 00:11:29.703 "uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:29.703 "assigned_rate_limits": { 00:11:29.703 "rw_ios_per_sec": 0, 00:11:29.703 "rw_mbytes_per_sec": 0, 00:11:29.703 "r_mbytes_per_sec": 0, 00:11:29.703 "w_mbytes_per_sec": 0 00:11:29.703 }, 00:11:29.703 "claimed": true, 00:11:29.703 "claim_type": "exclusive_write", 00:11:29.703 "zoned": false, 00:11:29.703 "supported_io_types": { 00:11:29.703 "read": true, 00:11:29.703 "write": true, 00:11:29.703 "unmap": true, 00:11:29.703 "flush": true, 00:11:29.703 "reset": true, 00:11:29.703 "nvme_admin": false, 00:11:29.703 "nvme_io": false, 00:11:29.703 "nvme_io_md": false, 00:11:29.703 "write_zeroes": true, 00:11:29.703 "zcopy": true, 00:11:29.703 "get_zone_info": false, 00:11:29.703 "zone_management": false, 00:11:29.703 "zone_append": false, 00:11:29.703 "compare": false, 00:11:29.703 "compare_and_write": false, 00:11:29.703 "abort": true, 00:11:29.703 "seek_hole": false, 00:11:29.703 "seek_data": false, 00:11:29.703 "copy": true, 00:11:29.703 "nvme_iov_md": false 00:11:29.703 }, 00:11:29.703 "memory_domains": [ 00:11:29.703 { 00:11:29.704 "dma_device_id": "system", 00:11:29.704 "dma_device_type": 1 00:11:29.704 }, 00:11:29.704 { 00:11:29.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.704 "dma_device_type": 2 00:11:29.704 } 00:11:29.704 ], 00:11:29.704 "driver_specific": {} 00:11:29.704 } 00:11:29.704 ] 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.704 10:39:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.704 "name": "Existed_Raid", 00:11:29.704 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:29.704 "strip_size_kb": 0, 00:11:29.704 
"state": "online", 00:11:29.704 "raid_level": "raid1", 00:11:29.704 "superblock": true, 00:11:29.704 "num_base_bdevs": 4, 00:11:29.704 "num_base_bdevs_discovered": 4, 00:11:29.704 "num_base_bdevs_operational": 4, 00:11:29.704 "base_bdevs_list": [ 00:11:29.704 { 00:11:29.704 "name": "NewBaseBdev", 00:11:29.704 "uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:29.704 "is_configured": true, 00:11:29.704 "data_offset": 2048, 00:11:29.704 "data_size": 63488 00:11:29.704 }, 00:11:29.704 { 00:11:29.704 "name": "BaseBdev2", 00:11:29.704 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:29.704 "is_configured": true, 00:11:29.704 "data_offset": 2048, 00:11:29.704 "data_size": 63488 00:11:29.704 }, 00:11:29.704 { 00:11:29.704 "name": "BaseBdev3", 00:11:29.704 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:29.704 "is_configured": true, 00:11:29.704 "data_offset": 2048, 00:11:29.704 "data_size": 63488 00:11:29.704 }, 00:11:29.704 { 00:11:29.704 "name": "BaseBdev4", 00:11:29.704 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:29.704 "is_configured": true, 00:11:29.704 "data_offset": 2048, 00:11:29.704 "data_size": 63488 00:11:29.704 } 00:11:29.704 ] 00:11:29.704 }' 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.704 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.963 
10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.963 [2024-11-18 10:39:55.805131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.963 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.964 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.964 "name": "Existed_Raid", 00:11:29.964 "aliases": [ 00:11:29.964 "80da9e10-aee3-4685-b3d8-f5fef837279e" 00:11:29.964 ], 00:11:29.964 "product_name": "Raid Volume", 00:11:29.964 "block_size": 512, 00:11:29.964 "num_blocks": 63488, 00:11:29.964 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:29.964 "assigned_rate_limits": { 00:11:29.964 "rw_ios_per_sec": 0, 00:11:29.964 "rw_mbytes_per_sec": 0, 00:11:29.964 "r_mbytes_per_sec": 0, 00:11:29.964 "w_mbytes_per_sec": 0 00:11:29.964 }, 00:11:29.964 "claimed": false, 00:11:29.964 "zoned": false, 00:11:29.964 "supported_io_types": { 00:11:29.964 "read": true, 00:11:29.964 "write": true, 00:11:29.964 "unmap": false, 00:11:29.964 "flush": false, 00:11:29.964 "reset": true, 00:11:29.964 "nvme_admin": false, 00:11:29.964 "nvme_io": false, 00:11:29.964 "nvme_io_md": false, 00:11:29.964 "write_zeroes": true, 00:11:29.964 "zcopy": false, 00:11:29.964 "get_zone_info": false, 00:11:29.964 "zone_management": false, 00:11:29.964 "zone_append": false, 00:11:29.964 "compare": false, 00:11:29.964 "compare_and_write": false, 00:11:29.964 
"abort": false, 00:11:29.964 "seek_hole": false, 00:11:29.964 "seek_data": false, 00:11:29.964 "copy": false, 00:11:29.964 "nvme_iov_md": false 00:11:29.964 }, 00:11:29.964 "memory_domains": [ 00:11:29.964 { 00:11:29.964 "dma_device_id": "system", 00:11:29.964 "dma_device_type": 1 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.964 "dma_device_type": 2 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "system", 00:11:29.964 "dma_device_type": 1 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.964 "dma_device_type": 2 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "system", 00:11:29.964 "dma_device_type": 1 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.964 "dma_device_type": 2 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "system", 00:11:29.964 "dma_device_type": 1 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.964 "dma_device_type": 2 00:11:29.964 } 00:11:29.964 ], 00:11:29.964 "driver_specific": { 00:11:29.964 "raid": { 00:11:29.964 "uuid": "80da9e10-aee3-4685-b3d8-f5fef837279e", 00:11:29.964 "strip_size_kb": 0, 00:11:29.964 "state": "online", 00:11:29.964 "raid_level": "raid1", 00:11:29.964 "superblock": true, 00:11:29.964 "num_base_bdevs": 4, 00:11:29.964 "num_base_bdevs_discovered": 4, 00:11:29.964 "num_base_bdevs_operational": 4, 00:11:29.964 "base_bdevs_list": [ 00:11:29.964 { 00:11:29.964 "name": "NewBaseBdev", 00:11:29.964 "uuid": "7b1ead48-cad2-47f6-8557-d3c35ac3c929", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 2048, 00:11:29.964 "data_size": 63488 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "name": "BaseBdev2", 00:11:29.964 "uuid": "adcd563c-86c6-4e76-8a2f-15baf16a52fb", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 2048, 00:11:29.964 "data_size": 63488 00:11:29.964 }, 00:11:29.964 { 
00:11:29.964 "name": "BaseBdev3", 00:11:29.964 "uuid": "b0a3cf9f-a049-4370-820b-e3e52f5223e6", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 2048, 00:11:29.964 "data_size": 63488 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "name": "BaseBdev4", 00:11:29.964 "uuid": "b1ce375f-ef0a-49d4-b12b-0f1ff96f07e2", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 2048, 00:11:29.964 "data_size": 63488 00:11:29.964 } 00:11:29.964 ] 00:11:29.964 } 00:11:29.964 } 00:11:29.964 }' 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.224 BaseBdev2 00:11:30.224 BaseBdev3 00:11:30.224 BaseBdev4' 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.224 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.224 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.484 [2024-11-18 10:39:56.132269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.484 [2024-11-18 10:39:56.132305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.484 [2024-11-18 10:39:56.132378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.484 [2024-11-18 10:39:56.132666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.484 [2024-11-18 10:39:56.132678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73703 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73703 ']' 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73703 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73703 00:11:30.484 killing process with pid 73703 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73703' 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73703 00:11:30.484 [2024-11-18 10:39:56.182294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.484 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73703 00:11:30.744 [2024-11-18 10:39:56.598461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.124 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.124 00:11:32.124 real 0m11.414s 00:11:32.124 user 0m17.847s 00:11:32.124 sys 0m2.061s 00:11:32.124 ************************************ 00:11:32.124 END TEST raid_state_function_test_sb 
00:11:32.124 ************************************ 00:11:32.124 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.124 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.124 10:39:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:32.124 10:39:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:32.124 10:39:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.124 10:39:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.124 ************************************ 00:11:32.124 START TEST raid_superblock_test 00:11:32.124 ************************************ 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:32.124 10:39:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74374 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74374 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74374 ']' 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.124 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.124 [2024-11-18 10:39:57.924675] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:32.124 [2024-11-18 10:39:57.924858] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74374 ] 00:11:32.384 [2024-11-18 10:39:58.102119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.384 [2024-11-18 10:39:58.231176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.643 [2024-11-18 10:39:58.453430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.643 [2024-11-18 10:39:58.453565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:32.902 
10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.902 malloc1 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.902 [2024-11-18 10:39:58.769159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.902 [2024-11-18 10:39:58.769319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.902 [2024-11-18 10:39:58.769378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:32.902 [2024-11-18 10:39:58.769419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.902 [2024-11-18 10:39:58.771818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.902 [2024-11-18 10:39:58.771893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.902 pt1 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.902 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.903 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.163 malloc2 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.163 [2024-11-18 10:39:58.832992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.163 [2024-11-18 10:39:58.833049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.163 [2024-11-18 10:39:58.833071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:33.163 [2024-11-18 10:39:58.833081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.163 [2024-11-18 10:39:58.835430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.163 [2024-11-18 10:39:58.835465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.163 
pt2 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.163 malloc3 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.163 [2024-11-18 10:39:58.925439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:33.163 [2024-11-18 10:39:58.925554] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.163 [2024-11-18 10:39:58.925593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:33.163 [2024-11-18 10:39:58.925621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.163 [2024-11-18 10:39:58.927870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.163 [2024-11-18 10:39:58.927941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:33.163 pt3 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.163 malloc4 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.163 [2024-11-18 10:39:58.989323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:33.163 [2024-11-18 10:39:58.989414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.163 [2024-11-18 10:39:58.989448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.163 [2024-11-18 10:39:58.989472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.163 [2024-11-18 10:39:58.991714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.163 [2024-11-18 10:39:58.991784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:33.163 pt4 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.163 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.163 [2024-11-18 10:39:59.001339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.163 [2024-11-18 10:39:59.003411] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.163 [2024-11-18 10:39:59.003510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:33.163 [2024-11-18 10:39:59.003586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:33.163 [2024-11-18 10:39:59.003806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:33.163 [2024-11-18 10:39:59.003856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.163 [2024-11-18 10:39:59.004139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.163 [2024-11-18 10:39:59.004406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:33.163 [2024-11-18 10:39:59.004455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:33.163 [2024-11-18 10:39:59.004635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.163 
10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.163 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.425 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.425 "name": "raid_bdev1", 00:11:33.425 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd", 00:11:33.425 "strip_size_kb": 0, 00:11:33.425 "state": "online", 00:11:33.425 "raid_level": "raid1", 00:11:33.425 "superblock": true, 00:11:33.425 "num_base_bdevs": 4, 00:11:33.425 "num_base_bdevs_discovered": 4, 00:11:33.425 "num_base_bdevs_operational": 4, 00:11:33.425 "base_bdevs_list": [ 00:11:33.425 { 00:11:33.425 "name": "pt1", 00:11:33.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.425 "is_configured": true, 00:11:33.426 "data_offset": 2048, 00:11:33.426 "data_size": 63488 00:11:33.426 }, 00:11:33.426 { 00:11:33.426 "name": "pt2", 00:11:33.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.426 "is_configured": true, 00:11:33.426 "data_offset": 2048, 00:11:33.426 "data_size": 63488 00:11:33.426 }, 00:11:33.426 { 00:11:33.426 "name": "pt3", 00:11:33.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.426 "is_configured": true, 00:11:33.426 "data_offset": 2048, 00:11:33.426 "data_size": 63488 
00:11:33.426 }, 00:11:33.426 { 00:11:33.426 "name": "pt4", 00:11:33.426 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.426 "is_configured": true, 00:11:33.426 "data_offset": 2048, 00:11:33.426 "data_size": 63488 00:11:33.426 } 00:11:33.426 ] 00:11:33.426 }' 00:11:33.426 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.426 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.685 [2024-11-18 10:39:59.460814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.685 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.685 "name": "raid_bdev1", 00:11:33.685 "aliases": [ 00:11:33.685 "4aaa4844-2f1a-4418-a821-5c68c670debd" 00:11:33.685 ], 
00:11:33.685 "product_name": "Raid Volume", 00:11:33.685 "block_size": 512, 00:11:33.685 "num_blocks": 63488, 00:11:33.685 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd", 00:11:33.685 "assigned_rate_limits": { 00:11:33.685 "rw_ios_per_sec": 0, 00:11:33.685 "rw_mbytes_per_sec": 0, 00:11:33.685 "r_mbytes_per_sec": 0, 00:11:33.685 "w_mbytes_per_sec": 0 00:11:33.685 }, 00:11:33.685 "claimed": false, 00:11:33.685 "zoned": false, 00:11:33.685 "supported_io_types": { 00:11:33.685 "read": true, 00:11:33.685 "write": true, 00:11:33.685 "unmap": false, 00:11:33.685 "flush": false, 00:11:33.685 "reset": true, 00:11:33.685 "nvme_admin": false, 00:11:33.685 "nvme_io": false, 00:11:33.685 "nvme_io_md": false, 00:11:33.685 "write_zeroes": true, 00:11:33.685 "zcopy": false, 00:11:33.685 "get_zone_info": false, 00:11:33.685 "zone_management": false, 00:11:33.685 "zone_append": false, 00:11:33.685 "compare": false, 00:11:33.685 "compare_and_write": false, 00:11:33.685 "abort": false, 00:11:33.685 "seek_hole": false, 00:11:33.685 "seek_data": false, 00:11:33.685 "copy": false, 00:11:33.685 "nvme_iov_md": false 00:11:33.685 }, 00:11:33.685 "memory_domains": [ 00:11:33.685 { 00:11:33.685 "dma_device_id": "system", 00:11:33.685 "dma_device_type": 1 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.685 "dma_device_type": 2 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "dma_device_id": "system", 00:11:33.685 "dma_device_type": 1 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.685 "dma_device_type": 2 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "dma_device_id": "system", 00:11:33.685 "dma_device_type": 1 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.685 "dma_device_type": 2 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "dma_device_id": "system", 00:11:33.685 "dma_device_type": 1 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:33.685 "dma_device_type": 2 00:11:33.685 } 00:11:33.685 ], 00:11:33.685 "driver_specific": { 00:11:33.685 "raid": { 00:11:33.685 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd", 00:11:33.685 "strip_size_kb": 0, 00:11:33.685 "state": "online", 00:11:33.685 "raid_level": "raid1", 00:11:33.685 "superblock": true, 00:11:33.685 "num_base_bdevs": 4, 00:11:33.685 "num_base_bdevs_discovered": 4, 00:11:33.685 "num_base_bdevs_operational": 4, 00:11:33.685 "base_bdevs_list": [ 00:11:33.685 { 00:11:33.685 "name": "pt1", 00:11:33.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.685 "is_configured": true, 00:11:33.685 "data_offset": 2048, 00:11:33.685 "data_size": 63488 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "name": "pt2", 00:11:33.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.685 "is_configured": true, 00:11:33.685 "data_offset": 2048, 00:11:33.685 "data_size": 63488 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "name": "pt3", 00:11:33.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.685 "is_configured": true, 00:11:33.685 "data_offset": 2048, 00:11:33.685 "data_size": 63488 00:11:33.685 }, 00:11:33.685 { 00:11:33.685 "name": "pt4", 00:11:33.685 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.685 "is_configured": true, 00:11:33.685 "data_offset": 2048, 00:11:33.685 "data_size": 63488 00:11:33.685 } 00:11:33.685 ] 00:11:33.685 } 00:11:33.685 } 00:11:33.685 }' 00:11:33.686 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.686 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:33.686 pt2 00:11:33.686 pt3 00:11:33.686 pt4' 00:11:33.686 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.686 10:39:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.686 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.946 10:39:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 [2024-11-18 10:39:59.756338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4aaa4844-2f1a-4418-a821-5c68c670debd 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4aaa4844-2f1a-4418-a821-5c68c670debd ']' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 [2024-11-18 10:39:59.787994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.946 [2024-11-18 10:39:59.788017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.946 [2024-11-18 10:39:59.788089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.946 [2024-11-18 10:39:59.788185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.946 [2024-11-18 10:39:59.788221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.207 10:39:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.207 [2024-11-18 10:39:59.931763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:34.207 [2024-11-18 10:39:59.933849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:34.207 [2024-11-18 10:39:59.933934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:34.207 [2024-11-18 10:39:59.933985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:34.207 [2024-11-18 10:39:59.934058] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:34.207 [2024-11-18 10:39:59.934128] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:34.207 [2024-11-18 10:39:59.934180] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:34.207 [2024-11-18 10:39:59.934233] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:34.207 [2024-11-18 10:39:59.934275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.207 [2024-11-18 10:39:59.934333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:34.207 request: 00:11:34.207 { 00:11:34.207 "name": "raid_bdev1", 00:11:34.207 "raid_level": "raid1", 00:11:34.207 "base_bdevs": [ 00:11:34.207 "malloc1", 00:11:34.207 "malloc2", 00:11:34.207 "malloc3", 00:11:34.207 "malloc4" 00:11:34.207 ], 00:11:34.207 "superblock": false, 00:11:34.207 "method": "bdev_raid_create", 00:11:34.207 "req_id": 1 00:11:34.207 } 00:11:34.207 Got JSON-RPC error response 00:11:34.207 response: 00:11:34.207 { 00:11:34.207 "code": -17, 00:11:34.207 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:34.207 } 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:34.207 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:34.208 
10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.208 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.208 [2024-11-18 10:39:59.999628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:34.208 [2024-11-18 10:39:59.999714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.208 [2024-11-18 10:39:59.999732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.208 [2024-11-18 10:39:59.999743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.208 [2024-11-18 10:40:00.002020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.208 [2024-11-18 10:40:00.002058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:34.208 [2024-11-18 10:40:00.002121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:34.208 [2024-11-18 10:40:00.002194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:34.208 pt1 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.208 10:40:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.208 "name": "raid_bdev1", 00:11:34.208 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd", 00:11:34.208 "strip_size_kb": 0, 00:11:34.208 "state": "configuring", 00:11:34.208 "raid_level": "raid1", 00:11:34.208 "superblock": true, 00:11:34.208 "num_base_bdevs": 4, 00:11:34.208 "num_base_bdevs_discovered": 1, 00:11:34.208 "num_base_bdevs_operational": 4, 00:11:34.208 "base_bdevs_list": [ 00:11:34.208 { 00:11:34.208 "name": "pt1", 00:11:34.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.208 "is_configured": true, 00:11:34.208 "data_offset": 2048, 00:11:34.208 "data_size": 63488 00:11:34.208 }, 00:11:34.208 { 00:11:34.208 "name": null, 00:11:34.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.208 "is_configured": false, 00:11:34.208 "data_offset": 2048, 00:11:34.208 "data_size": 63488 00:11:34.208 }, 00:11:34.208 { 00:11:34.208 "name": null, 00:11:34.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.208 
"is_configured": false, 00:11:34.208 "data_offset": 2048, 00:11:34.208 "data_size": 63488 00:11:34.208 }, 00:11:34.208 { 00:11:34.208 "name": null, 00:11:34.208 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.208 "is_configured": false, 00:11:34.208 "data_offset": 2048, 00:11:34.208 "data_size": 63488 00:11:34.208 } 00:11:34.208 ] 00:11:34.208 }' 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.208 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 [2024-11-18 10:40:00.466933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:34.778 [2024-11-18 10:40:00.467029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.778 [2024-11-18 10:40:00.467064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:34.778 [2024-11-18 10:40:00.467094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.778 [2024-11-18 10:40:00.467532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.778 [2024-11-18 10:40:00.467590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:34.778 [2024-11-18 10:40:00.467682] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:34.778 [2024-11-18 10:40:00.467740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:34.778 pt2 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 [2024-11-18 10:40:00.478896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.778 10:40:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.778 "name": "raid_bdev1", 00:11:34.778 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd", 00:11:34.778 "strip_size_kb": 0, 00:11:34.778 "state": "configuring", 00:11:34.778 "raid_level": "raid1", 00:11:34.778 "superblock": true, 00:11:34.778 "num_base_bdevs": 4, 00:11:34.778 "num_base_bdevs_discovered": 1, 00:11:34.778 "num_base_bdevs_operational": 4, 00:11:34.778 "base_bdevs_list": [ 00:11:34.778 { 00:11:34.778 "name": "pt1", 00:11:34.778 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.778 "is_configured": true, 00:11:34.778 "data_offset": 2048, 00:11:34.778 "data_size": 63488 00:11:34.778 }, 00:11:34.778 { 00:11:34.778 "name": null, 00:11:34.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.778 "is_configured": false, 00:11:34.778 "data_offset": 0, 00:11:34.778 "data_size": 63488 00:11:34.778 }, 00:11:34.778 { 00:11:34.778 "name": null, 00:11:34.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.778 "is_configured": false, 00:11:34.778 "data_offset": 2048, 00:11:34.778 "data_size": 63488 00:11:34.778 }, 00:11:34.778 { 00:11:34.778 "name": null, 00:11:34.778 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.778 "is_configured": false, 00:11:34.778 "data_offset": 2048, 00:11:34.778 "data_size": 63488 00:11:34.778 } 00:11:34.778 ] 00:11:34.778 }' 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.778 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 [2024-11-18 10:40:00.870222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.038 [2024-11-18 10:40:00.870262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.038 [2024-11-18 10:40:00.870285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:35.038 [2024-11-18 10:40:00.870294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.038 [2024-11-18 10:40:00.870665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.038 [2024-11-18 10:40:00.870680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.038 [2024-11-18 10:40:00.870739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.038 [2024-11-18 10:40:00.870756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.038 pt2 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:35.038 10:40:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.038 [2024-11-18 10:40:00.878212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:35.038 [2024-11-18 10:40:00.878253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:35.038 [2024-11-18 10:40:00.878268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:11:35.038 [2024-11-18 10:40:00.878275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:35.038 [2024-11-18 10:40:00.878609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:35.038 [2024-11-18 10:40:00.878623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:35.038 [2024-11-18 10:40:00.878674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:35.038 [2024-11-18 10:40:00.878689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:35.038 pt3
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.038 [2024-11-18 10:40:00.886167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:35.038 [2024-11-18 10:40:00.886234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:35.038 [2024-11-18 10:40:00.886249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:11:35.038 [2024-11-18 10:40:00.886256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:35.038 [2024-11-18 10:40:00.886626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:35.038 [2024-11-18 10:40:00.886641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:35.038 [2024-11-18 10:40:00.886692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:11:35.038 [2024-11-18 10:40:00.886707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:35.038 [2024-11-18 10:40:00.886839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:35.038 [2024-11-18 10:40:00.886847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:35.038 [2024-11-18 10:40:00.887100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:11:35.038 [2024-11-18 10:40:00.887266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:35.038 [2024-11-18 10:40:00.887280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:11:35.038 [2024-11-18 10:40:00.887404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:35.038 pt4
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.038 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.298 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:35.298 "name": "raid_bdev1",
00:11:35.298 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd",
00:11:35.298 "strip_size_kb": 0,
00:11:35.298 "state": "online",
00:11:35.298 "raid_level": "raid1",
00:11:35.298 "superblock": true,
00:11:35.298 "num_base_bdevs": 4,
00:11:35.298 "num_base_bdevs_discovered": 4,
00:11:35.298 "num_base_bdevs_operational": 4,
00:11:35.298 "base_bdevs_list": [
00:11:35.298 {
00:11:35.298 "name": "pt1",
00:11:35.298 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:35.298 "is_configured": true,
00:11:35.298 "data_offset": 2048,
00:11:35.298 "data_size": 63488
00:11:35.298 },
00:11:35.298 {
00:11:35.298 "name": "pt2",
00:11:35.298 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:35.298 "is_configured": true,
00:11:35.298 "data_offset": 2048,
00:11:35.298 "data_size": 63488
00:11:35.298 },
00:11:35.298 {
00:11:35.298 "name": "pt3",
00:11:35.298 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:35.298 "is_configured": true,
00:11:35.298 "data_offset": 2048,
00:11:35.298 "data_size": 63488
00:11:35.298 },
00:11:35.298 {
00:11:35.298 "name": "pt4",
00:11:35.298 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:35.298 "is_configured": true,
00:11:35.298 "data_offset": 2048,
00:11:35.298 "data_size": 63488
00:11:35.298 }
00:11:35.298 ]
00:11:35.298 }'
00:11:35.298 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:35.298 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.557 [2024-11-18 10:40:01.337682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:35.557 "name": "raid_bdev1",
00:11:35.557 "aliases": [
00:11:35.557 "4aaa4844-2f1a-4418-a821-5c68c670debd"
00:11:35.557 ],
00:11:35.557 "product_name": "Raid Volume",
00:11:35.557 "block_size": 512,
00:11:35.557 "num_blocks": 63488,
00:11:35.557 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd",
00:11:35.557 "assigned_rate_limits": {
00:11:35.557 "rw_ios_per_sec": 0,
00:11:35.557 "rw_mbytes_per_sec": 0,
00:11:35.557 "r_mbytes_per_sec": 0,
00:11:35.557 "w_mbytes_per_sec": 0
00:11:35.557 },
00:11:35.557 "claimed": false,
00:11:35.557 "zoned": false,
00:11:35.557 "supported_io_types": {
00:11:35.557 "read": true,
00:11:35.557 "write": true,
00:11:35.557 "unmap": false,
00:11:35.557 "flush": false,
00:11:35.557 "reset": true,
00:11:35.557 "nvme_admin": false,
00:11:35.557 "nvme_io": false,
00:11:35.557 "nvme_io_md": false,
00:11:35.557 "write_zeroes": true,
00:11:35.557 "zcopy": false,
00:11:35.557 "get_zone_info": false,
00:11:35.557 "zone_management": false,
00:11:35.557 "zone_append": false,
00:11:35.557 "compare": false,
00:11:35.557 "compare_and_write": false,
00:11:35.557 "abort": false,
00:11:35.557 "seek_hole": false,
00:11:35.557 "seek_data": false,
00:11:35.557 "copy": false,
00:11:35.557 "nvme_iov_md": false
00:11:35.557 },
00:11:35.557 "memory_domains": [
00:11:35.557 {
00:11:35.557 "dma_device_id": "system",
00:11:35.557 "dma_device_type": 1
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:35.557 "dma_device_type": 2
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "dma_device_id": "system",
00:11:35.557 "dma_device_type": 1
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:35.557 "dma_device_type": 2
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "dma_device_id": "system",
00:11:35.557 "dma_device_type": 1
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:35.557 "dma_device_type": 2
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "dma_device_id": "system",
00:11:35.557 "dma_device_type": 1
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:35.557 "dma_device_type": 2
00:11:35.557 }
00:11:35.557 ],
00:11:35.557 "driver_specific": {
00:11:35.557 "raid": {
00:11:35.557 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd",
00:11:35.557 "strip_size_kb": 0,
00:11:35.557 "state": "online",
00:11:35.557 "raid_level": "raid1",
00:11:35.557 "superblock": true,
00:11:35.557 "num_base_bdevs": 4,
00:11:35.557 "num_base_bdevs_discovered": 4,
00:11:35.557 "num_base_bdevs_operational": 4,
00:11:35.557 "base_bdevs_list": [
00:11:35.557 {
00:11:35.557 "name": "pt1",
00:11:35.557 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:35.557 "is_configured": true,
00:11:35.557 "data_offset": 2048,
00:11:35.557 "data_size": 63488
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "name": "pt2",
00:11:35.557 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:35.557 "is_configured": true,
00:11:35.557 "data_offset": 2048,
00:11:35.557 "data_size": 63488
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "name": "pt3",
00:11:35.557 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:35.557 "is_configured": true,
00:11:35.557 "data_offset": 2048,
00:11:35.557 "data_size": 63488
00:11:35.557 },
00:11:35.557 {
00:11:35.557 "name": "pt4",
00:11:35.557 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:35.557 "is_configured": true,
00:11:35.557 "data_offset": 2048,
00:11:35.557 "data_size": 63488
00:11:35.557 }
00:11:35.557 ]
00:11:35.557 }
00:11:35.557 }
00:11:35.557 }'
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:35.557 pt2
00:11:35.557 pt3
00:11:35.557 pt4'
00:11:35.557 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:35.816 [2024-11-18 10:40:01.597206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4aaa4844-2f1a-4418-a821-5c68c670debd '!=' 4aaa4844-2f1a-4418-a821-5c68c670debd ']'
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:11:35.816 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.817 [2024-11-18 10:40:01.648879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:35.817 "name": "raid_bdev1",
00:11:35.817 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd",
00:11:35.817 "strip_size_kb": 0,
00:11:35.817 "state": "online",
00:11:35.817 "raid_level": "raid1",
00:11:35.817 "superblock": true,
00:11:35.817 "num_base_bdevs": 4,
00:11:35.817 "num_base_bdevs_discovered": 3,
00:11:35.817 "num_base_bdevs_operational": 3,
00:11:35.817 "base_bdevs_list": [
00:11:35.817 {
00:11:35.817 "name": null,
00:11:35.817 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:35.817 "is_configured": false,
00:11:35.817 "data_offset": 0,
00:11:35.817 "data_size": 63488
00:11:35.817 },
00:11:35.817 {
00:11:35.817 "name": "pt2",
00:11:35.817 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:35.817 "is_configured": true,
00:11:35.817 "data_offset": 2048,
00:11:35.817 "data_size": 63488
00:11:35.817 },
00:11:35.817 {
00:11:35.817 "name": "pt3",
00:11:35.817 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:35.817 "is_configured": true,
00:11:35.817 "data_offset": 2048,
00:11:35.817 "data_size": 63488
00:11:35.817 },
00:11:35.817 {
00:11:35.817 "name": "pt4",
00:11:35.817 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:35.817 "is_configured": true,
00:11:35.817 "data_offset": 2048,
00:11:35.817 "data_size": 63488
00:11:35.817 }
00:11:35.817 ]
00:11:35.817 }'
00:11:35.817 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:36.076 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.335 [2024-11-18 10:40:02.096079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:36.335 [2024-11-18 10:40:02.096148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:36.335 [2024-11-18 10:40:02.096265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:36.335 [2024-11-18 10:40:02.096367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:36.335 [2024-11-18 10:40:02.096420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.335 [2024-11-18 10:40:02.191919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:36.335 [2024-11-18 10:40:02.191961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:36.335 [2024-11-18 10:40:02.191979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:11:36.335 [2024-11-18 10:40:02.191987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:36.335 [2024-11-18 10:40:02.194440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:36.335 [2024-11-18 10:40:02.194473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:36.335 [2024-11-18 10:40:02.194538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:36.335 [2024-11-18 10:40:02.194579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:36.335 pt2
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.335 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.594 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.594 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:36.594 "name": "raid_bdev1",
00:11:36.594 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd",
00:11:36.594 "strip_size_kb": 0,
00:11:36.594 "state": "configuring",
00:11:36.594 "raid_level": "raid1",
00:11:36.594 "superblock": true,
00:11:36.594 "num_base_bdevs": 4,
00:11:36.594 "num_base_bdevs_discovered": 1,
00:11:36.594 "num_base_bdevs_operational": 3,
00:11:36.594 "base_bdevs_list": [
00:11:36.594 {
00:11:36.594 "name": null,
00:11:36.594 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:36.594 "is_configured": false,
00:11:36.594 "data_offset": 2048,
00:11:36.594 "data_size": 63488
00:11:36.594 },
00:11:36.594 {
00:11:36.594 "name": "pt2",
00:11:36.594 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:36.594 "is_configured": true,
00:11:36.594 "data_offset": 2048,
00:11:36.594 "data_size": 63488
00:11:36.594 },
00:11:36.594 {
00:11:36.594 "name": null,
00:11:36.594 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:36.594 "is_configured": false,
00:11:36.594 "data_offset": 2048,
00:11:36.594 "data_size": 63488
00:11:36.594 },
00:11:36.594 {
00:11:36.594 "name": null,
00:11:36.594 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:36.594 "is_configured": false,
00:11:36.594 "data_offset": 2048,
00:11:36.594 "data_size": 63488
00:11:36.594 }
00:11:36.594 ]
00:11:36.594 }'
00:11:36.594 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:36.594 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.854 [2024-11-18 10:40:02.651131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:36.854 [2024-11-18 10:40:02.651242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:36.854 [2024-11-18 10:40:02.651279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:11:36.854 [2024-11-18 10:40:02.651312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:36.854 [2024-11-18 10:40:02.651733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:36.854 [2024-11-18 10:40:02.651785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:36.854 [2024-11-18 10:40:02.651882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:36.854 [2024-11-18 10:40:02.651927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:36.854 pt3
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:36.854 "name": "raid_bdev1",
00:11:36.854 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd",
00:11:36.854 "strip_size_kb": 0,
00:11:36.854 "state": "configuring",
00:11:36.854 "raid_level": "raid1",
00:11:36.854 "superblock": true,
00:11:36.854 "num_base_bdevs": 4,
00:11:36.854 "num_base_bdevs_discovered": 2,
00:11:36.854 "num_base_bdevs_operational": 3,
00:11:36.854 "base_bdevs_list": [
00:11:36.854 {
00:11:36.854 "name": null,
00:11:36.854 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:36.854 "is_configured": false,
00:11:36.854 "data_offset": 2048,
00:11:36.854 "data_size": 63488
00:11:36.854 },
00:11:36.854 {
00:11:36.854 "name": "pt2",
00:11:36.854 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:36.854 "is_configured": true,
00:11:36.854 "data_offset": 2048,
00:11:36.854 "data_size": 63488
00:11:36.854 },
00:11:36.854 {
00:11:36.854 "name": "pt3",
00:11:36.854 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:36.854 "is_configured": true,
00:11:36.854 "data_offset": 2048,
00:11:36.854 "data_size": 63488
00:11:36.854 },
00:11:36.854 {
00:11:36.854 "name": null,
00:11:36.854 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:36.854 "is_configured": false,
00:11:36.854 "data_offset": 2048,
00:11:36.854 "data_size": 63488
00:11:36.854 }
00:11:36.854 ]
00:11:36.854 }'
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:36.854 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.423 [2024-11-18 10:40:03.102347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:37.423 [2024-11-18 10:40:03.102393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:37.423 [2024-11-18 10:40:03.102410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:11:37.423 [2024-11-18 10:40:03.102418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:37.423 [2024-11-18 10:40:03.102794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:37.423 [2024-11-18 10:40:03.102809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:37.423 [2024-11-18 10:40:03.102870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:11:37.423 [2024-11-18 10:40:03.102894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:37.423 [2024-11-18 10:40:03.103021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:11:37.423 [2024-11-18 10:40:03.103029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:37.423 [2024-11-18 10:40:03.103291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:11:37.423 [2024-11-18 10:40:03.103443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:11:37.423 [2024-11-18 10:40:03.103463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:11:37.423 [2024-11-18 10:40:03.103601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:37.423 pt4
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.423 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:37.423 "name": "raid_bdev1",
00:11:37.423 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd",
00:11:37.424 "strip_size_kb": 0,
00:11:37.424 "state": "online",
00:11:37.424 "raid_level": "raid1",
00:11:37.424 "superblock": true,
00:11:37.424 "num_base_bdevs": 4,
00:11:37.424 "num_base_bdevs_discovered": 3,
00:11:37.424 "num_base_bdevs_operational": 3,
00:11:37.424 "base_bdevs_list": [
00:11:37.424 {
00:11:37.424 "name": null,
00:11:37.424 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:37.424 "is_configured": false,
00:11:37.424 "data_offset": 2048,
00:11:37.424 "data_size": 63488
00:11:37.424 },
00:11:37.424 {
00:11:37.424 "name": "pt2",
00:11:37.424 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:37.424 "is_configured": true,
00:11:37.424 "data_offset": 2048,
00:11:37.424 "data_size": 63488
00:11:37.424 },
00:11:37.424 {
00:11:37.424 "name": "pt3",
00:11:37.424 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:37.424 "is_configured": true,
00:11:37.424 "data_offset": 2048,
00:11:37.424 "data_size": 63488
00:11:37.424 },
00:11:37.424 {
00:11:37.424 "name": "pt4",
00:11:37.424 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:37.424 "is_configured": true,
00:11:37.424 "data_offset": 2048,
00:11:37.424 "data_size": 63488
00:11:37.424 }
00:11:37.424 ]
00:11:37.424 }'
00:11:37.424 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:37.424 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.683 [2024-11-18 10:40:03.489627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:37.683 [2024-11-18 10:40:03.489650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:37.683 [2024-11-18 10:40:03.489704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:37.683 [2024-11-18 10:40:03.489765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:37.683 [2024-11-18 10:40:03.489777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:11:37.683 10:40:03
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:37.683 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.684 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.684 [2024-11-18 10:40:03.561518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:37.684 [2024-11-18 10:40:03.561576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:37.684 [2024-11-18 10:40:03.561593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:37.684 [2024-11-18 10:40:03.561604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.684 [2024-11-18 10:40:03.564080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.684 [2024-11-18 10:40:03.564120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:37.684 [2024-11-18 10:40:03.564195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:37.684 [2024-11-18 10:40:03.564260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:37.684 [2024-11-18 10:40:03.564393] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:37.684 [2024-11-18 10:40:03.564411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.684 [2024-11-18 10:40:03.564425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:37.684 [2024-11-18 10:40:03.564505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.684 [2024-11-18 10:40:03.564603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:37.943 pt1 00:11:37.943 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.943 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:37.943 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:37.943 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.943 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.944 "name": "raid_bdev1", 00:11:37.944 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd", 00:11:37.944 "strip_size_kb": 0, 00:11:37.944 "state": "configuring", 00:11:37.944 "raid_level": "raid1", 00:11:37.944 "superblock": true, 00:11:37.944 "num_base_bdevs": 4, 00:11:37.944 "num_base_bdevs_discovered": 2, 00:11:37.944 "num_base_bdevs_operational": 3, 00:11:37.944 "base_bdevs_list": [ 00:11:37.944 { 00:11:37.944 "name": null, 00:11:37.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.944 "is_configured": false, 00:11:37.944 "data_offset": 2048, 00:11:37.944 
"data_size": 63488 00:11:37.944 }, 00:11:37.944 { 00:11:37.944 "name": "pt2", 00:11:37.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.944 "is_configured": true, 00:11:37.944 "data_offset": 2048, 00:11:37.944 "data_size": 63488 00:11:37.944 }, 00:11:37.944 { 00:11:37.944 "name": "pt3", 00:11:37.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.944 "is_configured": true, 00:11:37.944 "data_offset": 2048, 00:11:37.944 "data_size": 63488 00:11:37.944 }, 00:11:37.944 { 00:11:37.944 "name": null, 00:11:37.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:37.944 "is_configured": false, 00:11:37.944 "data_offset": 2048, 00:11:37.944 "data_size": 63488 00:11:37.944 } 00:11:37.944 ] 00:11:37.944 }' 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.944 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.204 [2024-11-18 
10:40:04.040696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:38.204 [2024-11-18 10:40:04.040785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.204 [2024-11-18 10:40:04.040819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:38.204 [2024-11-18 10:40:04.040846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.204 [2024-11-18 10:40:04.041245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.204 [2024-11-18 10:40:04.041301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:38.204 [2024-11-18 10:40:04.041392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:38.204 [2024-11-18 10:40:04.041447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:38.204 [2024-11-18 10:40:04.041584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:38.204 [2024-11-18 10:40:04.041618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.204 [2024-11-18 10:40:04.041878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:38.204 [2024-11-18 10:40:04.042061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:38.204 [2024-11-18 10:40:04.042099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:38.204 [2024-11-18 10:40:04.042277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.204 pt4 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:38.204 10:40:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.204 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.463 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.463 "name": "raid_bdev1", 00:11:38.463 "uuid": "4aaa4844-2f1a-4418-a821-5c68c670debd", 00:11:38.463 "strip_size_kb": 0, 00:11:38.463 "state": "online", 00:11:38.463 "raid_level": "raid1", 00:11:38.463 "superblock": true, 00:11:38.463 "num_base_bdevs": 4, 00:11:38.463 "num_base_bdevs_discovered": 3, 00:11:38.463 "num_base_bdevs_operational": 3, 00:11:38.463 "base_bdevs_list": [ 00:11:38.463 { 
00:11:38.463 "name": null, 00:11:38.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.463 "is_configured": false, 00:11:38.463 "data_offset": 2048, 00:11:38.463 "data_size": 63488 00:11:38.463 }, 00:11:38.463 { 00:11:38.463 "name": "pt2", 00:11:38.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.463 "is_configured": true, 00:11:38.463 "data_offset": 2048, 00:11:38.463 "data_size": 63488 00:11:38.463 }, 00:11:38.463 { 00:11:38.463 "name": "pt3", 00:11:38.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.463 "is_configured": true, 00:11:38.463 "data_offset": 2048, 00:11:38.463 "data_size": 63488 00:11:38.463 }, 00:11:38.463 { 00:11:38.463 "name": "pt4", 00:11:38.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.463 "is_configured": true, 00:11:38.463 "data_offset": 2048, 00:11:38.463 "data_size": 63488 00:11:38.463 } 00:11:38.463 ] 00:11:38.463 }' 00:11:38.463 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.463 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.722 
10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.722 [2024-11-18 10:40:04.544109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4aaa4844-2f1a-4418-a821-5c68c670debd '!=' 4aaa4844-2f1a-4418-a821-5c68c670debd ']' 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74374 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74374 ']' 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74374 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.722 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74374 00:11:38.982 killing process with pid 74374 00:11:38.982 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.982 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.982 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74374' 00:11:38.982 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74374 00:11:38.982 [2024-11-18 10:40:04.628546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.982 [2024-11-18 10:40:04.628614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.982 [2024-11-18 10:40:04.628674] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.982 [2024-11-18 10:40:04.628686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:38.982 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74374 00:11:39.241 [2024-11-18 10:40:05.046094] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.621 ************************************ 00:11:40.621 END TEST raid_superblock_test 00:11:40.621 ************************************ 00:11:40.621 10:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:40.621 00:11:40.621 real 0m8.365s 00:11:40.621 user 0m12.981s 00:11:40.621 sys 0m1.580s 00:11:40.621 10:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.621 10:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.621 10:40:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:40.621 10:40:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:40.622 10:40:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.622 10:40:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.622 ************************************ 00:11:40.622 START TEST raid_read_error_test 00:11:40.622 ************************************ 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:40.622 10:40:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jtzkO6FIXy 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74861 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74861 00:11:40.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74861 ']' 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.622 10:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.622 [2024-11-18 10:40:06.380096] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:40.622 [2024-11-18 10:40:06.380237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74861 ] 00:11:40.881 [2024-11-18 10:40:06.558848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.881 [2024-11-18 10:40:06.685012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.140 [2024-11-18 10:40:06.911130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.140 [2024-11-18 10:40:06.911308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.399 BaseBdev1_malloc 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.399 true 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.399 [2024-11-18 10:40:07.243531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:41.399 [2024-11-18 10:40:07.243687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.399 [2024-11-18 10:40:07.243713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:41.399 [2024-11-18 10:40:07.243726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.399 [2024-11-18 10:40:07.246094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.399 [2024-11-18 10:40:07.246133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.399 BaseBdev1 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.399 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.658 BaseBdev2_malloc 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.658 true 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.658 [2024-11-18 10:40:07.315571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:41.658 [2024-11-18 10:40:07.315624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.658 [2024-11-18 10:40:07.315640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:41.658 [2024-11-18 10:40:07.315651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.658 [2024-11-18 10:40:07.317936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.658 [2024-11-18 10:40:07.318039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.658 BaseBdev2 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.658 BaseBdev3_malloc 00:11:41.658 10:40:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:41.658 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.659 true 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.659 [2024-11-18 10:40:07.419620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:41.659 [2024-11-18 10:40:07.419672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.659 [2024-11-18 10:40:07.419690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:41.659 [2024-11-18 10:40:07.419702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.659 [2024-11-18 10:40:07.421984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.659 [2024-11-18 10:40:07.422100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:41.659 BaseBdev3 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.659 BaseBdev4_malloc 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.659 true 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.659 [2024-11-18 10:40:07.493207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:41.659 [2024-11-18 10:40:07.493254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.659 [2024-11-18 10:40:07.493271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:41.659 [2024-11-18 10:40:07.493282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.659 [2024-11-18 10:40:07.495502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.659 [2024-11-18 10:40:07.495539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:41.659 BaseBdev4 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.659 [2024-11-18 10:40:07.505246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.659 [2024-11-18 10:40:07.507321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.659 [2024-11-18 10:40:07.507395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.659 [2024-11-18 10:40:07.507456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:41.659 [2024-11-18 10:40:07.507673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:41.659 [2024-11-18 10:40:07.507686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.659 [2024-11-18 10:40:07.507915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:41.659 [2024-11-18 10:40:07.508071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:41.659 [2024-11-18 10:40:07.508080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:41.659 [2024-11-18 10:40:07.508242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:41.659 10:40:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.659 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.918 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.918 "name": "raid_bdev1", 00:11:41.918 "uuid": "10547f07-58a1-47e0-ae8a-7173d7022d0e", 00:11:41.918 "strip_size_kb": 0, 00:11:41.918 "state": "online", 00:11:41.918 "raid_level": "raid1", 00:11:41.918 "superblock": true, 00:11:41.918 "num_base_bdevs": 4, 00:11:41.918 "num_base_bdevs_discovered": 4, 00:11:41.918 "num_base_bdevs_operational": 4, 00:11:41.918 "base_bdevs_list": [ 00:11:41.918 { 
00:11:41.918 "name": "BaseBdev1", 00:11:41.918 "uuid": "1a5bb0ae-d81a-5c74-af34-8547c47ef0e9", 00:11:41.918 "is_configured": true, 00:11:41.918 "data_offset": 2048, 00:11:41.918 "data_size": 63488 00:11:41.918 }, 00:11:41.918 { 00:11:41.918 "name": "BaseBdev2", 00:11:41.918 "uuid": "630d9d3f-2155-51af-943d-25a572cd6f78", 00:11:41.918 "is_configured": true, 00:11:41.918 "data_offset": 2048, 00:11:41.918 "data_size": 63488 00:11:41.918 }, 00:11:41.918 { 00:11:41.918 "name": "BaseBdev3", 00:11:41.918 "uuid": "8978446f-e01a-5a7d-a7b4-9ad8d76e7c96", 00:11:41.918 "is_configured": true, 00:11:41.918 "data_offset": 2048, 00:11:41.918 "data_size": 63488 00:11:41.918 }, 00:11:41.918 { 00:11:41.918 "name": "BaseBdev4", 00:11:41.918 "uuid": "84f77ddc-2764-54d9-ac2c-a0924f521f0b", 00:11:41.918 "is_configured": true, 00:11:41.918 "data_offset": 2048, 00:11:41.918 "data_size": 63488 00:11:41.918 } 00:11:41.918 ] 00:11:41.918 }' 00:11:41.918 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.918 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.177 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:42.177 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:42.177 [2024-11-18 10:40:08.057726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.116 10:40:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.116 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.375 10:40:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.376 10:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.376 "name": "raid_bdev1", 00:11:43.376 "uuid": "10547f07-58a1-47e0-ae8a-7173d7022d0e", 00:11:43.376 "strip_size_kb": 0, 00:11:43.376 "state": "online", 00:11:43.376 "raid_level": "raid1", 00:11:43.376 "superblock": true, 00:11:43.376 "num_base_bdevs": 4, 00:11:43.376 "num_base_bdevs_discovered": 4, 00:11:43.376 "num_base_bdevs_operational": 4, 00:11:43.376 "base_bdevs_list": [ 00:11:43.376 { 00:11:43.376 "name": "BaseBdev1", 00:11:43.376 "uuid": "1a5bb0ae-d81a-5c74-af34-8547c47ef0e9", 00:11:43.376 "is_configured": true, 00:11:43.376 "data_offset": 2048, 00:11:43.376 "data_size": 63488 00:11:43.376 }, 00:11:43.376 { 00:11:43.376 "name": "BaseBdev2", 00:11:43.376 "uuid": "630d9d3f-2155-51af-943d-25a572cd6f78", 00:11:43.376 "is_configured": true, 00:11:43.376 "data_offset": 2048, 00:11:43.376 "data_size": 63488 00:11:43.376 }, 00:11:43.376 { 00:11:43.376 "name": "BaseBdev3", 00:11:43.376 "uuid": "8978446f-e01a-5a7d-a7b4-9ad8d76e7c96", 00:11:43.376 "is_configured": true, 00:11:43.376 "data_offset": 2048, 00:11:43.376 "data_size": 63488 00:11:43.376 }, 00:11:43.376 { 00:11:43.376 "name": "BaseBdev4", 00:11:43.376 "uuid": "84f77ddc-2764-54d9-ac2c-a0924f521f0b", 00:11:43.376 "is_configured": true, 00:11:43.376 "data_offset": 2048, 00:11:43.376 "data_size": 63488 00:11:43.376 } 00:11:43.376 ] 00:11:43.376 }' 00:11:43.376 10:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.376 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:43.634 [2024-11-18 10:40:09.373782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.634 [2024-11-18 10:40:09.373927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.634 [2024-11-18 10:40:09.376695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.634 [2024-11-18 10:40:09.376812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.634 [2024-11-18 10:40:09.376960] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.634 [2024-11-18 10:40:09.377011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:43.634 { 00:11:43.634 "results": [ 00:11:43.634 { 00:11:43.634 "job": "raid_bdev1", 00:11:43.634 "core_mask": "0x1", 00:11:43.634 "workload": "randrw", 00:11:43.634 "percentage": 50, 00:11:43.634 "status": "finished", 00:11:43.634 "queue_depth": 1, 00:11:43.634 "io_size": 131072, 00:11:43.634 "runtime": 1.316608, 00:11:43.634 "iops": 8228.7210771923, 00:11:43.634 "mibps": 1028.5901346490375, 00:11:43.634 "io_failed": 0, 00:11:43.634 "io_timeout": 0, 00:11:43.634 "avg_latency_us": 119.10467209407872, 00:11:43.634 "min_latency_us": 22.022707423580787, 00:11:43.634 "max_latency_us": 1552.5449781659388 00:11:43.634 } 00:11:43.634 ], 00:11:43.634 "core_count": 1 00:11:43.634 } 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74861 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74861 ']' 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74861 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74861 00:11:43.634 killing process with pid 74861 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74861' 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74861 00:11:43.634 [2024-11-18 10:40:09.418903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.634 10:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74861 00:11:43.897 [2024-11-18 10:40:09.757586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jtzkO6FIXy 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:45.276 ************************************ 00:11:45.276 END TEST raid_read_error_test 00:11:45.276 ************************************ 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:45.276 00:11:45.276 real 0m4.709s 00:11:45.276 user 0m5.348s 00:11:45.276 sys 0m0.717s 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.276 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.276 10:40:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:45.276 10:40:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:45.276 10:40:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.276 10:40:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.276 ************************************ 00:11:45.276 START TEST raid_write_error_test 00:11:45.276 ************************************ 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pmWHqoPteC 00:11:45.276 10:40:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75001 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75001 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75001 ']' 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.276 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.536 [2024-11-18 10:40:11.160377] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:45.536 [2024-11-18 10:40:11.160585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75001 ] 00:11:45.536 [2024-11-18 10:40:11.333885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.796 [2024-11-18 10:40:11.468090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.056 [2024-11-18 10:40:11.693856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.056 [2024-11-18 10:40:11.693924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.316 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.316 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:46.316 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.316 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.316 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.316 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 BaseBdev1_malloc 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 true 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 [2024-11-18 10:40:12.060661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:46.316 [2024-11-18 10:40:12.060737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.316 [2024-11-18 10:40:12.060758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:46.316 [2024-11-18 10:40:12.060769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.316 [2024-11-18 10:40:12.063168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.316 [2024-11-18 10:40:12.063231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.316 BaseBdev1 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 BaseBdev2_malloc 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:46.316 10:40:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.316 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.317 true 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.317 [2024-11-18 10:40:12.132968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:46.317 [2024-11-18 10:40:12.133030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.317 [2024-11-18 10:40:12.133047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:46.317 [2024-11-18 10:40:12.133057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.317 [2024-11-18 10:40:12.135445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.317 [2024-11-18 10:40:12.135483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.317 BaseBdev2 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.317 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:46.577 BaseBdev3_malloc 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.577 true 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.577 [2024-11-18 10:40:12.240092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:46.577 [2024-11-18 10:40:12.240158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.577 [2024-11-18 10:40:12.240195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:46.577 [2024-11-18 10:40:12.240208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.577 [2024-11-18 10:40:12.242467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.577 [2024-11-18 10:40:12.242504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:46.577 BaseBdev3 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.577 BaseBdev4_malloc 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.577 true 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.577 [2024-11-18 10:40:12.310439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:46.577 [2024-11-18 10:40:12.310489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.577 [2024-11-18 10:40:12.310507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:46.577 [2024-11-18 10:40:12.310518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.577 [2024-11-18 10:40:12.312862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.577 [2024-11-18 10:40:12.312901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:46.577 BaseBdev4 
00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.577 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.577 [2024-11-18 10:40:12.322478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.577 [2024-11-18 10:40:12.324520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.577 [2024-11-18 10:40:12.324603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.578 [2024-11-18 10:40:12.324662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.578 [2024-11-18 10:40:12.324871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:46.578 [2024-11-18 10:40:12.324883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.578 [2024-11-18 10:40:12.325107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:46.578 [2024-11-18 10:40:12.325290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:46.578 [2024-11-18 10:40:12.325300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:46.578 [2024-11-18 10:40:12.325440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.578 "name": "raid_bdev1", 00:11:46.578 "uuid": "3da7c736-9d83-4a43-a4bb-3b40522c6220", 00:11:46.578 "strip_size_kb": 0, 00:11:46.578 "state": "online", 00:11:46.578 "raid_level": "raid1", 00:11:46.578 "superblock": true, 00:11:46.578 "num_base_bdevs": 4, 00:11:46.578 "num_base_bdevs_discovered": 4, 00:11:46.578 
"num_base_bdevs_operational": 4, 00:11:46.578 "base_bdevs_list": [ 00:11:46.578 { 00:11:46.578 "name": "BaseBdev1", 00:11:46.578 "uuid": "79f85852-c01f-5a04-bada-824837584749", 00:11:46.578 "is_configured": true, 00:11:46.578 "data_offset": 2048, 00:11:46.578 "data_size": 63488 00:11:46.578 }, 00:11:46.578 { 00:11:46.578 "name": "BaseBdev2", 00:11:46.578 "uuid": "0206f29b-a256-5999-a0aa-d890f8491571", 00:11:46.578 "is_configured": true, 00:11:46.578 "data_offset": 2048, 00:11:46.578 "data_size": 63488 00:11:46.578 }, 00:11:46.578 { 00:11:46.578 "name": "BaseBdev3", 00:11:46.578 "uuid": "a56e258c-05d6-505d-85d0-ea664405fda2", 00:11:46.578 "is_configured": true, 00:11:46.578 "data_offset": 2048, 00:11:46.578 "data_size": 63488 00:11:46.578 }, 00:11:46.578 { 00:11:46.578 "name": "BaseBdev4", 00:11:46.578 "uuid": "892d12e6-78e4-53bb-83dd-023124dc4827", 00:11:46.578 "is_configured": true, 00:11:46.578 "data_offset": 2048, 00:11:46.578 "data_size": 63488 00:11:46.578 } 00:11:46.578 ] 00:11:46.578 }' 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.578 10:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.145 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.145 10:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.145 [2024-11-18 10:40:12.926833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.084 [2024-11-18 10:40:13.838143] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:48.084 [2024-11-18 10:40:13.838316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.084 [2024-11-18 10:40:13.838593] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.084 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.084 "name": "raid_bdev1", 00:11:48.084 "uuid": "3da7c736-9d83-4a43-a4bb-3b40522c6220", 00:11:48.084 "strip_size_kb": 0, 00:11:48.084 "state": "online", 00:11:48.084 "raid_level": "raid1", 00:11:48.084 "superblock": true, 00:11:48.084 "num_base_bdevs": 4, 00:11:48.084 "num_base_bdevs_discovered": 3, 00:11:48.084 "num_base_bdevs_operational": 3, 00:11:48.084 "base_bdevs_list": [ 00:11:48.084 { 00:11:48.084 "name": null, 00:11:48.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.085 "is_configured": false, 00:11:48.085 "data_offset": 0, 00:11:48.085 "data_size": 63488 00:11:48.085 }, 00:11:48.085 { 00:11:48.085 "name": "BaseBdev2", 00:11:48.085 "uuid": "0206f29b-a256-5999-a0aa-d890f8491571", 00:11:48.085 "is_configured": true, 00:11:48.085 "data_offset": 2048, 00:11:48.085 "data_size": 63488 00:11:48.085 }, 00:11:48.085 { 00:11:48.085 "name": "BaseBdev3", 00:11:48.085 "uuid": "a56e258c-05d6-505d-85d0-ea664405fda2", 00:11:48.085 "is_configured": true, 00:11:48.085 "data_offset": 2048, 00:11:48.085 "data_size": 63488 00:11:48.085 }, 00:11:48.085 { 00:11:48.085 "name": "BaseBdev4", 00:11:48.085 "uuid": "892d12e6-78e4-53bb-83dd-023124dc4827", 00:11:48.085 "is_configured": true, 00:11:48.085 "data_offset": 2048, 00:11:48.085 "data_size": 63488 00:11:48.085 } 00:11:48.085 ] 
00:11:48.085 }' 00:11:48.085 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.085 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.655 [2024-11-18 10:40:14.306848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.655 [2024-11-18 10:40:14.307001] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.655 [2024-11-18 10:40:14.309594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.655 [2024-11-18 10:40:14.309638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.655 [2024-11-18 10:40:14.309741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.655 [2024-11-18 10:40:14.309754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.655 { 00:11:48.655 "results": [ 00:11:48.655 { 00:11:48.655 "job": "raid_bdev1", 00:11:48.655 "core_mask": "0x1", 00:11:48.655 "workload": "randrw", 00:11:48.655 "percentage": 50, 00:11:48.655 "status": "finished", 00:11:48.655 "queue_depth": 1, 00:11:48.655 "io_size": 131072, 00:11:48.655 "runtime": 1.380896, 00:11:48.655 "iops": 9245.446434778578, 00:11:48.655 "mibps": 1155.6808043473222, 00:11:48.655 "io_failed": 0, 00:11:48.655 "io_timeout": 0, 00:11:48.655 "avg_latency_us": 105.78084287308675, 00:11:48.655 "min_latency_us": 22.46986899563319, 
00:11:48.655 "max_latency_us": 1280.6707423580785 00:11:48.655 } 00:11:48.655 ], 00:11:48.655 "core_count": 1 00:11:48.655 } 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75001 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75001 ']' 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75001 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75001 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75001' 00:11:48.655 killing process with pid 75001 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75001 00:11:48.655 [2024-11-18 10:40:14.357038] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.655 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75001 00:11:48.915 [2024-11-18 10:40:14.699978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pmWHqoPteC 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # fail_per_s=0.00 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:50.294 00:11:50.294 real 0m4.875s 00:11:50.294 user 0m5.686s 00:11:50.294 sys 0m0.689s 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.294 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.294 ************************************ 00:11:50.294 END TEST raid_write_error_test 00:11:50.294 ************************************ 00:11:50.294 10:40:15 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:50.294 10:40:15 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:50.294 10:40:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:50.294 10:40:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:50.294 10:40:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.294 10:40:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.294 ************************************ 00:11:50.294 START TEST raid_rebuild_test 00:11:50.294 ************************************ 00:11:50.294 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:50.294 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:50.294 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:50.294 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:50.294 
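The bdevperf results block from the write-error test above is internally consistent and easy to cross-check: the reported MiB/s equals IOPS times the 131072-byte I/O size divided by one MiB. A quick shell check (awk does the floating-point work):

```shell
# Cross-check the bdevperf summary reported above:
# mibps = iops * io_size / 1 MiB
iops=9245.446434778578
io_size=131072                        # 128 KiB per I/O, from the results JSON
mibps=$(awk -v i="$iops" -v s="$io_size" \
  'BEGIN { printf "%.4f", i * s / (1024 * 1024) }')
echo "$mibps MiB/s"                   # matches the reported 1155.6808... MiB/s
```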
10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:50.294 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:50.294 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:50.294 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:50.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75150 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75150 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75150 ']' 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.295 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.295 [2024-11-18 10:40:16.094570] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:50.295 [2024-11-18 10:40:16.094772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:50.295 Zero copy mechanism will not be used. 
00:11:50.295 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75150 ] 00:11:50.555 [2024-11-18 10:40:16.274012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.555 [2024-11-18 10:40:16.402979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.815 [2024-11-18 10:40:16.628092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.815 [2024-11-18 10:40:16.628308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.384 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.384 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.384 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.384 10:40:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.384 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 BaseBdev1_malloc 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 [2024-11-18 10:40:17.019920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:51.384 [2024-11-18 10:40:17.020068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.384 [2024-11-18 
10:40:17.020138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.384 [2024-11-18 10:40:17.020205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.384 [2024-11-18 10:40:17.022616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.384 [2024-11-18 10:40:17.022707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.384 BaseBdev1 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 BaseBdev2_malloc 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 [2024-11-18 10:40:17.081006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:51.384 [2024-11-18 10:40:17.081140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.384 [2024-11-18 10:40:17.081209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:51.384 [2024-11-18 10:40:17.081285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:51.384 [2024-11-18 10:40:17.083682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.384 [2024-11-18 10:40:17.083762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.384 BaseBdev2 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 spare_malloc 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 spare_delay 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 [2024-11-18 10:40:17.182937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:51.384 [2024-11-18 10:40:17.183024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.384 [2024-11-18 10:40:17.183044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:51.384 [2024-11-18 10:40:17.183056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.384 [2024-11-18 10:40:17.185535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.384 [2024-11-18 10:40:17.185576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:51.384 spare 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 [2024-11-18 10:40:17.194990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.384 [2024-11-18 10:40:17.197156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.384 [2024-11-18 10:40:17.197251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:51.384 [2024-11-18 10:40:17.197265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:51.384 [2024-11-18 10:40:17.197493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:51.384 [2024-11-18 10:40:17.197699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:51.384 [2024-11-18 10:40:17.197721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:51.384 [2024-11-18 10:40:17.197869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 
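The rebuild test's setup differs from the write-error test in one respect visible above: alongside the two BaseBdev stacks it builds a spare whose malloc backing is wrapped in a delay bdev before the passthru, so I/O to the spare can be throttled during rebuild. A dry-run sketch of that spare stack, again with a stub `rpc` function echoing the calls; the latency-option comments assume the usual `bdev_delay_create` meanings (average/p99 latencies in microseconds):

```shell
# Dry-run sketch of the spare stack from the rebuild-test trace above.
# The stub rpc() only echoes; a real run would drive a live SPDK target.
calls=0
rpc() { calls=$((calls + 1)); echo "rpc.py $*"; }

rpc bdev_malloc_create 32 512 -b spare_malloc
# -r/-t: avg/p99 read latency; -w/-n: avg/p99 write latency (assumed microseconds)
rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
rpc bdev_passthru_create -b spare_delay -p spare
echo "$calls RPCs issued"
```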
10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.384 "name": "raid_bdev1", 00:11:51.384 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:11:51.384 "strip_size_kb": 0, 00:11:51.384 "state": "online", 00:11:51.384 "raid_level": "raid1", 00:11:51.384 "superblock": false, 00:11:51.384 "num_base_bdevs": 2, 00:11:51.384 "num_base_bdevs_discovered": 
2, 00:11:51.384 "num_base_bdevs_operational": 2, 00:11:51.384 "base_bdevs_list": [ 00:11:51.384 { 00:11:51.384 "name": "BaseBdev1", 00:11:51.384 "uuid": "09d3af5a-99a9-5b32-abc8-6eca28c483cb", 00:11:51.384 "is_configured": true, 00:11:51.384 "data_offset": 0, 00:11:51.384 "data_size": 65536 00:11:51.384 }, 00:11:51.384 { 00:11:51.384 "name": "BaseBdev2", 00:11:51.384 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:11:51.384 "is_configured": true, 00:11:51.384 "data_offset": 0, 00:11:51.384 "data_size": 65536 00:11:51.384 } 00:11:51.384 ] 00:11:51.384 }' 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.384 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.952 [2024-11-18 10:40:17.638457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:51.952 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:52.210 [2024-11-18 10:40:17.917819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:52.210 /dev/nbd0 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.210 1+0 records in 00:11:52.210 1+0 records out 00:11:52.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546863 s, 7.5 MB/s 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:52.210 10:40:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:52.210 10:40:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.210 10:40:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.210 10:40:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:52.210 10:40:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:52.210 10:40:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:56.408 65536+0 records in 00:11:56.408 65536+0 records out 00:11:56.408 33554432 bytes (34 MB, 32 MiB) copied, 4.25468 s, 7.9 MB/s 00:11:56.408 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:56.408 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.408 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:56.408 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:56.408 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:56.408 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:56.408 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:56.668 [2024-11-18 10:40:22.477085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.668 
10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.668 [2024-11-18 10:40:22.493160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:56.668 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.927 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.927 "name": "raid_bdev1", 00:11:56.927 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:11:56.927 "strip_size_kb": 0, 00:11:56.927 "state": "online", 00:11:56.927 "raid_level": "raid1", 00:11:56.927 "superblock": false, 00:11:56.927 "num_base_bdevs": 2, 00:11:56.927 "num_base_bdevs_discovered": 1, 00:11:56.927 "num_base_bdevs_operational": 1, 00:11:56.927 "base_bdevs_list": [ 00:11:56.927 { 00:11:56.927 "name": null, 00:11:56.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.927 "is_configured": false, 00:11:56.927 "data_offset": 0, 00:11:56.927 "data_size": 65536 00:11:56.927 }, 00:11:56.927 { 00:11:56.927 "name": "BaseBdev2", 00:11:56.927 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:11:56.927 "is_configured": true, 00:11:56.927 "data_offset": 0, 00:11:56.927 "data_size": 65536 00:11:56.927 } 00:11:56.927 ] 00:11:56.927 }' 00:11:56.927 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.927 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.187 10:40:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:57.187 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.187 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.187 [2024-11-18 10:40:22.976326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:57.187 [2024-11-18 10:40:22.994981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:57.187 10:40:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.187 10:40:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:57.187 [2024-11-18 10:40:22.997165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:58.125 10:40:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.125 10:40:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.125 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.125 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.125 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.385 "name": "raid_bdev1", 00:11:58.385 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:11:58.385 "strip_size_kb": 0, 00:11:58.385 "state": "online", 00:11:58.385 "raid_level": "raid1", 00:11:58.385 "superblock": false, 00:11:58.385 "num_base_bdevs": 2, 00:11:58.385 "num_base_bdevs_discovered": 2, 00:11:58.385 "num_base_bdevs_operational": 2, 00:11:58.385 "process": { 00:11:58.385 "type": "rebuild", 00:11:58.385 "target": "spare", 00:11:58.385 "progress": { 00:11:58.385 "blocks": 20480, 00:11:58.385 "percent": 31 00:11:58.385 } 00:11:58.385 }, 00:11:58.385 "base_bdevs_list": [ 00:11:58.385 { 
00:11:58.385 "name": "spare", 00:11:58.385 "uuid": "35fefa4c-264d-5d14-ae6f-a843f92bd33f", 00:11:58.385 "is_configured": true, 00:11:58.385 "data_offset": 0, 00:11:58.385 "data_size": 65536 00:11:58.385 }, 00:11:58.385 { 00:11:58.385 "name": "BaseBdev2", 00:11:58.385 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:11:58.385 "is_configured": true, 00:11:58.385 "data_offset": 0, 00:11:58.385 "data_size": 65536 00:11:58.385 } 00:11:58.385 ] 00:11:58.385 }' 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.385 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.385 [2024-11-18 10:40:24.136327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.385 [2024-11-18 10:40:24.205866] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:58.385 [2024-11-18 10:40:24.205924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.385 [2024-11-18 10:40:24.205939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.386 [2024-11-18 10:40:24.205949] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.386 10:40:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.386 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.645 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.645 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.645 "name": "raid_bdev1", 00:11:58.645 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:11:58.645 "strip_size_kb": 0, 00:11:58.645 "state": "online", 00:11:58.645 "raid_level": "raid1", 00:11:58.645 "superblock": false, 00:11:58.645 "num_base_bdevs": 2, 00:11:58.645 "num_base_bdevs_discovered": 1, 
00:11:58.645 "num_base_bdevs_operational": 1, 00:11:58.645 "base_bdevs_list": [ 00:11:58.645 { 00:11:58.645 "name": null, 00:11:58.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.645 "is_configured": false, 00:11:58.645 "data_offset": 0, 00:11:58.645 "data_size": 65536 00:11:58.645 }, 00:11:58.645 { 00:11:58.645 "name": "BaseBdev2", 00:11:58.645 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:11:58.645 "is_configured": true, 00:11:58.645 "data_offset": 0, 00:11:58.645 "data_size": 65536 00:11:58.645 } 00:11:58.645 ] 00:11:58.645 }' 00:11:58.645 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.645 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.905 "name": "raid_bdev1", 00:11:58.905 "uuid": 
"3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:11:58.905 "strip_size_kb": 0, 00:11:58.905 "state": "online", 00:11:58.905 "raid_level": "raid1", 00:11:58.905 "superblock": false, 00:11:58.905 "num_base_bdevs": 2, 00:11:58.905 "num_base_bdevs_discovered": 1, 00:11:58.905 "num_base_bdevs_operational": 1, 00:11:58.905 "base_bdevs_list": [ 00:11:58.905 { 00:11:58.905 "name": null, 00:11:58.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.905 "is_configured": false, 00:11:58.905 "data_offset": 0, 00:11:58.905 "data_size": 65536 00:11:58.905 }, 00:11:58.905 { 00:11:58.905 "name": "BaseBdev2", 00:11:58.905 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:11:58.905 "is_configured": true, 00:11:58.905 "data_offset": 0, 00:11:58.905 "data_size": 65536 00:11:58.905 } 00:11:58.905 ] 00:11:58.905 }' 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.905 [2024-11-18 10:40:24.766002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:58.905 [2024-11-18 10:40:24.783565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.905 10:40:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:11:58.905 [2024-11-18 10:40:24.785680] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.284 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.284 "name": "raid_bdev1", 00:12:00.284 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:12:00.284 "strip_size_kb": 0, 00:12:00.284 "state": "online", 00:12:00.284 "raid_level": "raid1", 00:12:00.284 "superblock": false, 00:12:00.284 "num_base_bdevs": 2, 00:12:00.284 "num_base_bdevs_discovered": 2, 00:12:00.284 "num_base_bdevs_operational": 2, 00:12:00.284 "process": { 00:12:00.284 "type": "rebuild", 00:12:00.284 "target": "spare", 00:12:00.284 "progress": { 00:12:00.284 "blocks": 20480, 00:12:00.284 "percent": 31 00:12:00.284 } 00:12:00.284 }, 00:12:00.284 "base_bdevs_list": [ 00:12:00.284 { 00:12:00.284 "name": "spare", 00:12:00.284 "uuid": 
"35fefa4c-264d-5d14-ae6f-a843f92bd33f", 00:12:00.284 "is_configured": true, 00:12:00.284 "data_offset": 0, 00:12:00.284 "data_size": 65536 00:12:00.284 }, 00:12:00.284 { 00:12:00.284 "name": "BaseBdev2", 00:12:00.284 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:12:00.284 "is_configured": true, 00:12:00.285 "data_offset": 0, 00:12:00.285 "data_size": 65536 00:12:00.285 } 00:12:00.285 ] 00:12:00.285 }' 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=367 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.285 "name": "raid_bdev1", 00:12:00.285 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:12:00.285 "strip_size_kb": 0, 00:12:00.285 "state": "online", 00:12:00.285 "raid_level": "raid1", 00:12:00.285 "superblock": false, 00:12:00.285 "num_base_bdevs": 2, 00:12:00.285 "num_base_bdevs_discovered": 2, 00:12:00.285 "num_base_bdevs_operational": 2, 00:12:00.285 "process": { 00:12:00.285 "type": "rebuild", 00:12:00.285 "target": "spare", 00:12:00.285 "progress": { 00:12:00.285 "blocks": 22528, 00:12:00.285 "percent": 34 00:12:00.285 } 00:12:00.285 }, 00:12:00.285 "base_bdevs_list": [ 00:12:00.285 { 00:12:00.285 "name": "spare", 00:12:00.285 "uuid": "35fefa4c-264d-5d14-ae6f-a843f92bd33f", 00:12:00.285 "is_configured": true, 00:12:00.285 "data_offset": 0, 00:12:00.285 "data_size": 65536 00:12:00.285 }, 00:12:00.285 { 00:12:00.285 "name": "BaseBdev2", 00:12:00.285 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:12:00.285 "is_configured": true, 00:12:00.285 "data_offset": 0, 00:12:00.285 "data_size": 65536 00:12:00.285 } 00:12:00.285 ] 00:12:00.285 }' 00:12:00.285 10:40:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.285 10:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.285 10:40:26 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.285 10:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.285 10:40:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.223 10:40:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.482 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.482 "name": "raid_bdev1", 00:12:01.482 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:12:01.482 "strip_size_kb": 0, 00:12:01.482 "state": "online", 00:12:01.482 "raid_level": "raid1", 00:12:01.482 "superblock": false, 00:12:01.482 "num_base_bdevs": 2, 00:12:01.482 "num_base_bdevs_discovered": 2, 00:12:01.482 "num_base_bdevs_operational": 2, 00:12:01.482 "process": { 00:12:01.482 "type": "rebuild", 00:12:01.482 "target": "spare", 
00:12:01.482 "progress": { 00:12:01.482 "blocks": 45056, 00:12:01.482 "percent": 68 00:12:01.482 } 00:12:01.482 }, 00:12:01.482 "base_bdevs_list": [ 00:12:01.482 { 00:12:01.482 "name": "spare", 00:12:01.482 "uuid": "35fefa4c-264d-5d14-ae6f-a843f92bd33f", 00:12:01.482 "is_configured": true, 00:12:01.482 "data_offset": 0, 00:12:01.482 "data_size": 65536 00:12:01.482 }, 00:12:01.482 { 00:12:01.482 "name": "BaseBdev2", 00:12:01.482 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:12:01.482 "is_configured": true, 00:12:01.482 "data_offset": 0, 00:12:01.482 "data_size": 65536 00:12:01.482 } 00:12:01.482 ] 00:12:01.482 }' 00:12:01.482 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.482 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.482 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.482 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.482 10:40:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:02.419 [2024-11-18 10:40:28.008557] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:02.419 [2024-11-18 10:40:28.008743] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:02.419 [2024-11-18 10:40:28.008801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.419 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.419 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.419 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.419 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:02.419 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.419 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.419 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.420 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.420 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.420 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.420 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.420 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.420 "name": "raid_bdev1", 00:12:02.420 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:12:02.420 "strip_size_kb": 0, 00:12:02.420 "state": "online", 00:12:02.420 "raid_level": "raid1", 00:12:02.420 "superblock": false, 00:12:02.420 "num_base_bdevs": 2, 00:12:02.420 "num_base_bdevs_discovered": 2, 00:12:02.420 "num_base_bdevs_operational": 2, 00:12:02.420 "base_bdevs_list": [ 00:12:02.420 { 00:12:02.420 "name": "spare", 00:12:02.420 "uuid": "35fefa4c-264d-5d14-ae6f-a843f92bd33f", 00:12:02.420 "is_configured": true, 00:12:02.420 "data_offset": 0, 00:12:02.420 "data_size": 65536 00:12:02.420 }, 00:12:02.420 { 00:12:02.420 "name": "BaseBdev2", 00:12:02.420 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:12:02.420 "is_configured": true, 00:12:02.420 "data_offset": 0, 00:12:02.420 "data_size": 65536 00:12:02.420 } 00:12:02.420 ] 00:12:02.420 }' 00:12:02.420 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.680 "name": "raid_bdev1", 00:12:02.680 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:12:02.680 "strip_size_kb": 0, 00:12:02.680 "state": "online", 00:12:02.680 "raid_level": "raid1", 00:12:02.680 "superblock": false, 00:12:02.680 "num_base_bdevs": 2, 00:12:02.680 "num_base_bdevs_discovered": 2, 00:12:02.680 "num_base_bdevs_operational": 2, 00:12:02.680 "base_bdevs_list": [ 00:12:02.680 { 00:12:02.680 "name": "spare", 00:12:02.680 "uuid": "35fefa4c-264d-5d14-ae6f-a843f92bd33f", 00:12:02.680 "is_configured": true, 00:12:02.680 "data_offset": 0, 00:12:02.680 "data_size": 65536 
00:12:02.680 }, 00:12:02.680 { 00:12:02.680 "name": "BaseBdev2", 00:12:02.680 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:12:02.680 "is_configured": true, 00:12:02.680 "data_offset": 0, 00:12:02.680 "data_size": 65536 00:12:02.680 } 00:12:02.680 ] 00:12:02.680 }' 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.680 "name": "raid_bdev1", 00:12:02.680 "uuid": "3be5be8a-b3d9-4e15-b1bd-7cf23a01e497", 00:12:02.680 "strip_size_kb": 0, 00:12:02.680 "state": "online", 00:12:02.680 "raid_level": "raid1", 00:12:02.680 "superblock": false, 00:12:02.680 "num_base_bdevs": 2, 00:12:02.680 "num_base_bdevs_discovered": 2, 00:12:02.680 "num_base_bdevs_operational": 2, 00:12:02.680 "base_bdevs_list": [ 00:12:02.680 { 00:12:02.680 "name": "spare", 00:12:02.680 "uuid": "35fefa4c-264d-5d14-ae6f-a843f92bd33f", 00:12:02.680 "is_configured": true, 00:12:02.680 "data_offset": 0, 00:12:02.680 "data_size": 65536 00:12:02.680 }, 00:12:02.680 { 00:12:02.680 "name": "BaseBdev2", 00:12:02.680 "uuid": "d5baca55-8e31-5090-9f6e-c1d829c1ccda", 00:12:02.680 "is_configured": true, 00:12:02.680 "data_offset": 0, 00:12:02.680 "data_size": 65536 00:12:02.680 } 00:12:02.680 ] 00:12:02.680 }' 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.680 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.250 [2024-11-18 10:40:28.926031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.250 [2024-11-18 10:40:28.926121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:12:03.250 [2024-11-18 10:40:28.926239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.250 [2024-11-18 10:40:28.926333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.250 [2024-11-18 10:40:28.926344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.250 10:40:28 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.250 10:40:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:03.516 /dev/nbd0 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.516 1+0 records in 00:12:03.516 1+0 records out 00:12:03.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343707 s, 11.9 MB/s 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.516 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:03.517 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.517 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.517 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:03.775 /dev/nbd1 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.775 1+0 records in 00:12:03.775 1+0 records out 00:12:03.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412075 s, 9.9 MB/s 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.775 10:40:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.034 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.035 10:40:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75150 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75150 ']' 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75150 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75150 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.294 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.553 killing process with pid 75150 00:12:04.553 Received shutdown signal, test time was about 60.000000 seconds 00:12:04.553 00:12:04.553 Latency(us) 00:12:04.553 [2024-11-18T10:40:30.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.553 [2024-11-18T10:40:30.438Z] =================================================================================================================== 00:12:04.553 [2024-11-18T10:40:30.438Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:04.553 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75150' 00:12:04.553 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75150 00:12:04.553 [2024-11-18 10:40:30.177834] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.553 10:40:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75150 00:12:04.812 [2024-11-18 10:40:30.497664] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:06.223 00:12:06.223 real 0m15.657s 00:12:06.223 user 0m17.531s 00:12:06.223 sys 0m3.219s 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.223 ************************************ 00:12:06.223 END TEST raid_rebuild_test 00:12:06.223 ************************************ 00:12:06.223 10:40:31 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:06.223 10:40:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:06.223 10:40:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.223 10:40:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.223 ************************************ 00:12:06.223 START TEST raid_rebuild_test_sb 00:12:06.223 ************************************ 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:06.223 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.223 10:40:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75577 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75577 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75577 ']' 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.224 10:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.224 [2024-11-18 10:40:31.849687] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:06.224 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:06.224 Zero copy mechanism will not be used. 00:12:06.224 [2024-11-18 10:40:31.850415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75577 ] 00:12:06.224 [2024-11-18 10:40:32.018797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.497 [2024-11-18 10:40:32.155724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.757 [2024-11-18 10:40:32.380639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.757 [2024-11-18 10:40:32.380796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.016 BaseBdev1_malloc 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.016 [2024-11-18 10:40:32.711196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:07.016 [2024-11-18 10:40:32.711472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.016 [2024-11-18 10:40:32.711510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:07.016 [2024-11-18 10:40:32.711523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.016 [2024-11-18 10:40:32.714006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.016 [2024-11-18 10:40:32.714046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:07.016 BaseBdev1 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.016 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.017 BaseBdev2_malloc 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.017 [2024-11-18 10:40:32.772802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:07.017 [2024-11-18 10:40:32.772869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.017 [2024-11-18 10:40:32.772891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:07.017 [2024-11-18 10:40:32.772905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.017 [2024-11-18 10:40:32.775276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.017 [2024-11-18 10:40:32.775381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:07.017 BaseBdev2 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.017 spare_malloc 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.017 10:40:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.017 spare_delay 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.017 [2024-11-18 10:40:32.851064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:07.017 [2024-11-18 10:40:32.851124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.017 [2024-11-18 10:40:32.851143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:07.017 [2024-11-18 10:40:32.851154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.017 [2024-11-18 10:40:32.853546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.017 [2024-11-18 10:40:32.853658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:07.017 spare 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:07.017 [2024-11-18 10:40:32.863122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.017 [2024-11-18 10:40:32.865199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.017 [2024-11-18 10:40:32.865374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:07.017 [2024-11-18 10:40:32.865391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.017 [2024-11-18 10:40:32.865641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:07.017 [2024-11-18 10:40:32.865821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:07.017 [2024-11-18 10:40:32.865830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:07.017 [2024-11-18 10:40:32.865961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.017 10:40:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.017 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.276 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.276 "name": "raid_bdev1", 00:12:07.276 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:07.276 "strip_size_kb": 0, 00:12:07.276 "state": "online", 00:12:07.276 "raid_level": "raid1", 00:12:07.276 "superblock": true, 00:12:07.276 "num_base_bdevs": 2, 00:12:07.276 "num_base_bdevs_discovered": 2, 00:12:07.276 "num_base_bdevs_operational": 2, 00:12:07.276 "base_bdevs_list": [ 00:12:07.276 { 00:12:07.276 "name": "BaseBdev1", 00:12:07.276 "uuid": "ee1c3205-a9c8-5e73-9111-9a363c630098", 00:12:07.276 "is_configured": true, 00:12:07.276 "data_offset": 2048, 00:12:07.276 "data_size": 63488 00:12:07.276 }, 00:12:07.276 { 00:12:07.276 "name": "BaseBdev2", 00:12:07.276 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:07.276 "is_configured": true, 00:12:07.276 "data_offset": 2048, 00:12:07.276 "data_size": 63488 00:12:07.276 } 00:12:07.276 ] 00:12:07.276 }' 00:12:07.276 10:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.276 10:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.535 [2024-11-18 10:40:33.358623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.535 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.536 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:07.794 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.794 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:07.794 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:07.795 
10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:07.795 [2024-11-18 10:40:33.617973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:07.795 /dev/nbd0 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:07.795 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:08.054 10:40:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.054 1+0 records in 00:12:08.054 1+0 records out 00:12:08.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385195 s, 10.6 MB/s 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:08.054 10:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:12.248 63488+0 records in 00:12:12.248 63488+0 records out 00:12:12.248 32505856 bytes (33 MB, 31 MiB) copied, 4.14238 s, 7.8 MB/s 00:12:12.248 10:40:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:12.248 10:40:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.248 10:40:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:12.248 10:40:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.248 10:40:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:12.248 10:40:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.248 10:40:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:12.248 [2024-11-18 10:40:38.041749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.248 [2024-11-18 10:40:38.073759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.248 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.248 10:40:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.249 "name": "raid_bdev1", 00:12:12.249 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:12.249 "strip_size_kb": 0, 00:12:12.249 "state": "online", 00:12:12.249 "raid_level": "raid1", 00:12:12.249 "superblock": true, 00:12:12.249 "num_base_bdevs": 2, 
00:12:12.249 "num_base_bdevs_discovered": 1, 00:12:12.249 "num_base_bdevs_operational": 1, 00:12:12.249 "base_bdevs_list": [ 00:12:12.249 { 00:12:12.249 "name": null, 00:12:12.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.249 "is_configured": false, 00:12:12.249 "data_offset": 0, 00:12:12.249 "data_size": 63488 00:12:12.249 }, 00:12:12.249 { 00:12:12.249 "name": "BaseBdev2", 00:12:12.249 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:12.249 "is_configured": true, 00:12:12.249 "data_offset": 2048, 00:12:12.249 "data_size": 63488 00:12:12.249 } 00:12:12.249 ] 00:12:12.249 }' 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.249 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.818 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.818 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.818 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.818 [2024-11-18 10:40:38.529016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.818 [2024-11-18 10:40:38.546285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:12.818 10:40:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.818 [2024-11-18 10:40:38.548055] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.818 10:40:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.760 10:40:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.760 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.760 "name": "raid_bdev1", 00:12:13.760 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:13.760 "strip_size_kb": 0, 00:12:13.760 "state": "online", 00:12:13.760 "raid_level": "raid1", 00:12:13.760 "superblock": true, 00:12:13.760 "num_base_bdevs": 2, 00:12:13.760 "num_base_bdevs_discovered": 2, 00:12:13.760 "num_base_bdevs_operational": 2, 00:12:13.760 "process": { 00:12:13.760 "type": "rebuild", 00:12:13.760 "target": "spare", 00:12:13.760 "progress": { 00:12:13.760 "blocks": 20480, 00:12:13.760 "percent": 32 00:12:13.760 } 00:12:13.761 }, 00:12:13.761 "base_bdevs_list": [ 00:12:13.761 { 00:12:13.761 "name": "spare", 00:12:13.761 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:13.761 "is_configured": true, 00:12:13.761 "data_offset": 2048, 00:12:13.761 "data_size": 63488 00:12:13.761 }, 00:12:13.761 { 00:12:13.761 "name": "BaseBdev2", 00:12:13.761 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:13.761 "is_configured": true, 00:12:13.761 "data_offset": 2048, 00:12:13.761 "data_size": 63488 00:12:13.761 } 
00:12:13.761 ] 00:12:13.761 }' 00:12:13.761 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.022 [2024-11-18 10:40:39.707302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.022 [2024-11-18 10:40:39.752740] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:14.022 [2024-11-18 10:40:39.752828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.022 [2024-11-18 10:40:39.752844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.022 [2024-11-18 10:40:39.752855] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.022 "name": "raid_bdev1", 00:12:14.022 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:14.022 "strip_size_kb": 0, 00:12:14.022 "state": "online", 00:12:14.022 "raid_level": "raid1", 00:12:14.022 "superblock": true, 00:12:14.022 "num_base_bdevs": 2, 00:12:14.022 "num_base_bdevs_discovered": 1, 00:12:14.022 "num_base_bdevs_operational": 1, 00:12:14.022 "base_bdevs_list": [ 00:12:14.022 { 00:12:14.022 "name": null, 00:12:14.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.022 "is_configured": false, 00:12:14.022 "data_offset": 0, 00:12:14.022 "data_size": 63488 00:12:14.022 }, 00:12:14.022 { 00:12:14.022 "name": "BaseBdev2", 00:12:14.022 "uuid": 
"a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:14.022 "is_configured": true, 00:12:14.022 "data_offset": 2048, 00:12:14.022 "data_size": 63488 00:12:14.022 } 00:12:14.022 ] 00:12:14.022 }' 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.022 10:40:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.592 "name": "raid_bdev1", 00:12:14.592 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:14.592 "strip_size_kb": 0, 00:12:14.592 "state": "online", 00:12:14.592 "raid_level": "raid1", 00:12:14.592 "superblock": true, 00:12:14.592 "num_base_bdevs": 2, 00:12:14.592 "num_base_bdevs_discovered": 1, 00:12:14.592 "num_base_bdevs_operational": 1, 00:12:14.592 "base_bdevs_list": [ 00:12:14.592 { 
00:12:14.592 "name": null, 00:12:14.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.592 "is_configured": false, 00:12:14.592 "data_offset": 0, 00:12:14.592 "data_size": 63488 00:12:14.592 }, 00:12:14.592 { 00:12:14.592 "name": "BaseBdev2", 00:12:14.592 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:14.592 "is_configured": true, 00:12:14.592 "data_offset": 2048, 00:12:14.592 "data_size": 63488 00:12:14.592 } 00:12:14.592 ] 00:12:14.592 }' 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.592 [2024-11-18 10:40:40.370418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.592 [2024-11-18 10:40:40.385736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.592 10:40:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:14.592 [2024-11-18 10:40:40.387455] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:15.530 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.530 10:40:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.530 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.530 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.530 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.530 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.530 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.530 10:40:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.530 10:40:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.789 10:40:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.789 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.789 "name": "raid_bdev1", 00:12:15.789 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:15.789 "strip_size_kb": 0, 00:12:15.789 "state": "online", 00:12:15.789 "raid_level": "raid1", 00:12:15.789 "superblock": true, 00:12:15.789 "num_base_bdevs": 2, 00:12:15.789 "num_base_bdevs_discovered": 2, 00:12:15.789 "num_base_bdevs_operational": 2, 00:12:15.789 "process": { 00:12:15.789 "type": "rebuild", 00:12:15.789 "target": "spare", 00:12:15.789 "progress": { 00:12:15.789 "blocks": 20480, 00:12:15.790 "percent": 32 00:12:15.790 } 00:12:15.790 }, 00:12:15.790 "base_bdevs_list": [ 00:12:15.790 { 00:12:15.790 "name": "spare", 00:12:15.790 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:15.790 "is_configured": true, 00:12:15.790 "data_offset": 2048, 00:12:15.790 "data_size": 63488 00:12:15.790 }, 00:12:15.790 { 00:12:15.790 "name": "BaseBdev2", 00:12:15.790 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:15.790 
"is_configured": true, 00:12:15.790 "data_offset": 2048, 00:12:15.790 "data_size": 63488 00:12:15.790 } 00:12:15.790 ] 00:12:15.790 }' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:15.790 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.790 "name": "raid_bdev1", 00:12:15.790 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:15.790 "strip_size_kb": 0, 00:12:15.790 "state": "online", 00:12:15.790 "raid_level": "raid1", 00:12:15.790 "superblock": true, 00:12:15.790 "num_base_bdevs": 2, 00:12:15.790 "num_base_bdevs_discovered": 2, 00:12:15.790 "num_base_bdevs_operational": 2, 00:12:15.790 "process": { 00:12:15.790 "type": "rebuild", 00:12:15.790 "target": "spare", 00:12:15.790 "progress": { 00:12:15.790 "blocks": 22528, 00:12:15.790 "percent": 35 00:12:15.790 } 00:12:15.790 }, 00:12:15.790 "base_bdevs_list": [ 00:12:15.790 { 00:12:15.790 "name": "spare", 00:12:15.790 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:15.790 "is_configured": true, 00:12:15.790 "data_offset": 2048, 00:12:15.790 "data_size": 63488 00:12:15.790 }, 00:12:15.790 { 00:12:15.790 "name": "BaseBdev2", 00:12:15.790 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:15.790 "is_configured": true, 00:12:15.790 "data_offset": 2048, 00:12:15.790 "data_size": 63488 00:12:15.790 } 00:12:15.790 ] 00:12:15.790 }' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.790 10:40:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.790 10:40:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.169 "name": "raid_bdev1", 00:12:17.169 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:17.169 "strip_size_kb": 0, 00:12:17.169 "state": "online", 00:12:17.169 "raid_level": "raid1", 00:12:17.169 "superblock": true, 00:12:17.169 "num_base_bdevs": 2, 00:12:17.169 "num_base_bdevs_discovered": 2, 00:12:17.169 "num_base_bdevs_operational": 2, 00:12:17.169 "process": { 
00:12:17.169 "type": "rebuild", 00:12:17.169 "target": "spare", 00:12:17.169 "progress": { 00:12:17.169 "blocks": 45056, 00:12:17.169 "percent": 70 00:12:17.169 } 00:12:17.169 }, 00:12:17.169 "base_bdevs_list": [ 00:12:17.169 { 00:12:17.169 "name": "spare", 00:12:17.169 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:17.169 "is_configured": true, 00:12:17.169 "data_offset": 2048, 00:12:17.169 "data_size": 63488 00:12:17.169 }, 00:12:17.169 { 00:12:17.169 "name": "BaseBdev2", 00:12:17.169 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:17.169 "is_configured": true, 00:12:17.169 "data_offset": 2048, 00:12:17.169 "data_size": 63488 00:12:17.169 } 00:12:17.169 ] 00:12:17.169 }' 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.169 10:40:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:17.737 [2024-11-18 10:40:43.499247] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:17.737 [2024-11-18 10:40:43.499402] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:17.737 [2024-11-18 10:40:43.499517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.996 
10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.996 "name": "raid_bdev1", 00:12:17.996 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:17.996 "strip_size_kb": 0, 00:12:17.996 "state": "online", 00:12:17.996 "raid_level": "raid1", 00:12:17.996 "superblock": true, 00:12:17.996 "num_base_bdevs": 2, 00:12:17.996 "num_base_bdevs_discovered": 2, 00:12:17.996 "num_base_bdevs_operational": 2, 00:12:17.996 "base_bdevs_list": [ 00:12:17.996 { 00:12:17.996 "name": "spare", 00:12:17.996 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:17.996 "is_configured": true, 00:12:17.996 "data_offset": 2048, 00:12:17.996 "data_size": 63488 00:12:17.996 }, 00:12:17.996 { 00:12:17.996 "name": "BaseBdev2", 00:12:17.996 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:17.996 "is_configured": true, 00:12:17.996 "data_offset": 2048, 00:12:17.996 "data_size": 63488 00:12:17.996 } 00:12:17.996 ] 00:12:17.996 }' 00:12:17.996 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.255 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.256 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.256 10:40:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.256 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.256 10:40:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.256 10:40:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.256 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.256 "name": "raid_bdev1", 00:12:18.256 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:18.256 "strip_size_kb": 0, 00:12:18.256 "state": "online", 00:12:18.256 "raid_level": "raid1", 00:12:18.256 "superblock": true, 00:12:18.256 "num_base_bdevs": 2, 00:12:18.256 "num_base_bdevs_discovered": 2, 00:12:18.256 "num_base_bdevs_operational": 2, 00:12:18.256 "base_bdevs_list": [ 00:12:18.256 { 00:12:18.256 
"name": "spare", 00:12:18.256 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:18.256 "is_configured": true, 00:12:18.256 "data_offset": 2048, 00:12:18.256 "data_size": 63488 00:12:18.256 }, 00:12:18.256 { 00:12:18.256 "name": "BaseBdev2", 00:12:18.256 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:18.256 "is_configured": true, 00:12:18.256 "data_offset": 2048, 00:12:18.256 "data_size": 63488 00:12:18.256 } 00:12:18.256 ] 00:12:18.256 }' 00:12:18.256 10:40:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.256 "name": "raid_bdev1", 00:12:18.256 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:18.256 "strip_size_kb": 0, 00:12:18.256 "state": "online", 00:12:18.256 "raid_level": "raid1", 00:12:18.256 "superblock": true, 00:12:18.256 "num_base_bdevs": 2, 00:12:18.256 "num_base_bdevs_discovered": 2, 00:12:18.256 "num_base_bdevs_operational": 2, 00:12:18.256 "base_bdevs_list": [ 00:12:18.256 { 00:12:18.256 "name": "spare", 00:12:18.256 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:18.256 "is_configured": true, 00:12:18.256 "data_offset": 2048, 00:12:18.256 "data_size": 63488 00:12:18.256 }, 00:12:18.256 { 00:12:18.256 "name": "BaseBdev2", 00:12:18.256 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:18.256 "is_configured": true, 00:12:18.256 "data_offset": 2048, 00:12:18.256 "data_size": 63488 00:12:18.256 } 00:12:18.256 ] 00:12:18.256 }' 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.256 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.822 [2024-11-18 10:40:44.479888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.822 [2024-11-18 10:40:44.479969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.822 [2024-11-18 10:40:44.480077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.822 [2024-11-18 10:40:44.480162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.822 [2024-11-18 10:40:44.480220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.822 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.823 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.823 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:18.823 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.823 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.823 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:19.081 /dev/nbd0 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.081 1+0 records in 00:12:19.081 1+0 records out 00:12:19.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346331 s, 11.8 MB/s 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.081 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:19.339 /dev/nbd1 00:12:19.340 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.340 10:40:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.340 10:40:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:19.340 10:40:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.340 1+0 records in 00:12:19.340 1+0 records out 00:12:19.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466099 s, 8.8 MB/s 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.340 
10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.340 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.598 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.599 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.858 [2024-11-18 10:40:45.625674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:19.858 [2024-11-18 10:40:45.625734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.858 [2024-11-18 10:40:45.625757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:19.858 [2024-11-18 10:40:45.625767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.858 [2024-11-18 10:40:45.628040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.858 [2024-11-18 10:40:45.628081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:19.858 [2024-11-18 10:40:45.628186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:19.858 [2024-11-18 
10:40:45.628235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:19.858 [2024-11-18 10:40:45.628408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.858 spare 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.858 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.859 [2024-11-18 10:40:45.728306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:19.859 [2024-11-18 10:40:45.728334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.859 [2024-11-18 10:40:45.728603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:19.859 [2024-11-18 10:40:45.728764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:19.859 [2024-11-18 10:40:45.728777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:19.859 [2024-11-18 10:40:45.728934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.859 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.117 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.117 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.117 "name": "raid_bdev1", 00:12:20.117 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:20.117 "strip_size_kb": 0, 00:12:20.117 "state": "online", 00:12:20.117 "raid_level": "raid1", 00:12:20.117 "superblock": true, 00:12:20.117 "num_base_bdevs": 2, 00:12:20.117 "num_base_bdevs_discovered": 2, 00:12:20.117 "num_base_bdevs_operational": 2, 00:12:20.117 "base_bdevs_list": [ 00:12:20.117 { 00:12:20.117 "name": "spare", 00:12:20.117 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:20.117 "is_configured": true, 00:12:20.117 "data_offset": 2048, 00:12:20.117 "data_size": 63488 00:12:20.117 }, 00:12:20.117 { 00:12:20.117 "name": "BaseBdev2", 00:12:20.117 "uuid": 
"a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:20.117 "is_configured": true, 00:12:20.117 "data_offset": 2048, 00:12:20.117 "data_size": 63488 00:12:20.117 } 00:12:20.117 ] 00:12:20.117 }' 00:12:20.117 10:40:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.117 10:40:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.378 "name": "raid_bdev1", 00:12:20.378 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:20.378 "strip_size_kb": 0, 00:12:20.378 "state": "online", 00:12:20.378 "raid_level": "raid1", 00:12:20.378 "superblock": true, 00:12:20.378 "num_base_bdevs": 2, 00:12:20.378 "num_base_bdevs_discovered": 2, 00:12:20.378 "num_base_bdevs_operational": 2, 00:12:20.378 "base_bdevs_list": [ 00:12:20.378 { 
00:12:20.378 "name": "spare", 00:12:20.378 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:20.378 "is_configured": true, 00:12:20.378 "data_offset": 2048, 00:12:20.378 "data_size": 63488 00:12:20.378 }, 00:12:20.378 { 00:12:20.378 "name": "BaseBdev2", 00:12:20.378 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:20.378 "is_configured": true, 00:12:20.378 "data_offset": 2048, 00:12:20.378 "data_size": 63488 00:12:20.378 } 00:12:20.378 ] 00:12:20.378 }' 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.378 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.638 [2024-11-18 10:40:46.332514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.638 "name": "raid_bdev1", 00:12:20.638 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:20.638 "strip_size_kb": 0, 00:12:20.638 
"state": "online", 00:12:20.638 "raid_level": "raid1", 00:12:20.638 "superblock": true, 00:12:20.638 "num_base_bdevs": 2, 00:12:20.638 "num_base_bdevs_discovered": 1, 00:12:20.638 "num_base_bdevs_operational": 1, 00:12:20.638 "base_bdevs_list": [ 00:12:20.638 { 00:12:20.638 "name": null, 00:12:20.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.638 "is_configured": false, 00:12:20.638 "data_offset": 0, 00:12:20.638 "data_size": 63488 00:12:20.638 }, 00:12:20.638 { 00:12:20.638 "name": "BaseBdev2", 00:12:20.638 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:20.638 "is_configured": true, 00:12:20.638 "data_offset": 2048, 00:12:20.638 "data_size": 63488 00:12:20.638 } 00:12:20.638 ] 00:12:20.638 }' 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.638 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.207 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:21.207 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.207 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.207 [2024-11-18 10:40:46.799761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.207 [2024-11-18 10:40:46.799943] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:21.207 [2024-11-18 10:40:46.799959] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:21.208 [2024-11-18 10:40:46.800000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.208 [2024-11-18 10:40:46.815666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:21.208 10:40:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.208 10:40:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:21.208 [2024-11-18 10:40:46.817472] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.146 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.146 "name": "raid_bdev1", 00:12:22.146 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:22.146 "strip_size_kb": 0, 00:12:22.146 "state": "online", 00:12:22.146 "raid_level": "raid1", 
00:12:22.146 "superblock": true, 00:12:22.146 "num_base_bdevs": 2, 00:12:22.146 "num_base_bdevs_discovered": 2, 00:12:22.146 "num_base_bdevs_operational": 2, 00:12:22.146 "process": { 00:12:22.146 "type": "rebuild", 00:12:22.146 "target": "spare", 00:12:22.146 "progress": { 00:12:22.146 "blocks": 20480, 00:12:22.146 "percent": 32 00:12:22.146 } 00:12:22.146 }, 00:12:22.146 "base_bdevs_list": [ 00:12:22.146 { 00:12:22.146 "name": "spare", 00:12:22.146 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:22.146 "is_configured": true, 00:12:22.146 "data_offset": 2048, 00:12:22.146 "data_size": 63488 00:12:22.146 }, 00:12:22.146 { 00:12:22.146 "name": "BaseBdev2", 00:12:22.147 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:22.147 "is_configured": true, 00:12:22.147 "data_offset": 2048, 00:12:22.147 "data_size": 63488 00:12:22.147 } 00:12:22.147 ] 00:12:22.147 }' 00:12:22.147 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.147 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.147 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.147 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.147 10:40:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:22.147 10:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.147 10:40:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.147 [2024-11-18 10:40:47.981319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:22.147 [2024-11-18 10:40:48.022208] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:22.147 [2024-11-18 10:40:48.022273] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:22.147 [2024-11-18 10:40:48.022288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:22.147 [2024-11-18 10:40:48.022297] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.407 "name": "raid_bdev1", 00:12:22.407 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:22.407 "strip_size_kb": 0, 00:12:22.407 "state": "online", 00:12:22.407 "raid_level": "raid1", 00:12:22.407 "superblock": true, 00:12:22.407 "num_base_bdevs": 2, 00:12:22.407 "num_base_bdevs_discovered": 1, 00:12:22.407 "num_base_bdevs_operational": 1, 00:12:22.407 "base_bdevs_list": [ 00:12:22.407 { 00:12:22.407 "name": null, 00:12:22.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.407 "is_configured": false, 00:12:22.407 "data_offset": 0, 00:12:22.407 "data_size": 63488 00:12:22.407 }, 00:12:22.407 { 00:12:22.407 "name": "BaseBdev2", 00:12:22.407 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:22.407 "is_configured": true, 00:12:22.407 "data_offset": 2048, 00:12:22.407 "data_size": 63488 00:12:22.407 } 00:12:22.407 ] 00:12:22.407 }' 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.407 10:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.667 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:22.668 10:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.668 10:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.668 [2024-11-18 10:40:48.527350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:22.668 [2024-11-18 10:40:48.527458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.668 [2024-11-18 10:40:48.527496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:22.668 [2024-11-18 10:40:48.527526] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.668 [2024-11-18 10:40:48.527998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.668 [2024-11-18 10:40:48.528058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:22.668 [2024-11-18 10:40:48.528191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:22.668 [2024-11-18 10:40:48.528234] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:22.668 [2024-11-18 10:40:48.528278] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:22.668 [2024-11-18 10:40:48.528366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.668 [2024-11-18 10:40:48.543395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:22.668 spare 00:12:22.668 10:40:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.668 [2024-11-18 10:40:48.545194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.668 10:40:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:24.082 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.082 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.082 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.082 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.082 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.083 "name": "raid_bdev1", 00:12:24.083 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:24.083 "strip_size_kb": 0, 00:12:24.083 "state": "online", 00:12:24.083 "raid_level": "raid1", 00:12:24.083 "superblock": true, 00:12:24.083 "num_base_bdevs": 2, 00:12:24.083 "num_base_bdevs_discovered": 2, 00:12:24.083 "num_base_bdevs_operational": 2, 00:12:24.083 "process": { 00:12:24.083 "type": "rebuild", 00:12:24.083 "target": "spare", 00:12:24.083 "progress": { 00:12:24.083 "blocks": 20480, 00:12:24.083 "percent": 32 00:12:24.083 } 00:12:24.083 }, 00:12:24.083 "base_bdevs_list": [ 00:12:24.083 { 00:12:24.083 "name": "spare", 00:12:24.083 "uuid": "b875cda4-5905-538c-b340-5e47082e030a", 00:12:24.083 "is_configured": true, 00:12:24.083 "data_offset": 2048, 00:12:24.083 "data_size": 63488 00:12:24.083 }, 00:12:24.083 { 00:12:24.083 "name": "BaseBdev2", 00:12:24.083 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:24.083 "is_configured": true, 00:12:24.083 "data_offset": 2048, 00:12:24.083 "data_size": 63488 00:12:24.083 } 00:12:24.083 ] 00:12:24.083 }' 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.083 
10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.083 [2024-11-18 10:40:49.700485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.083 [2024-11-18 10:40:49.749907] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:24.083 [2024-11-18 10:40:49.749965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.083 [2024-11-18 10:40:49.749999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.083 [2024-11-18 10:40:49.750006] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.083 "name": "raid_bdev1", 00:12:24.083 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:24.083 "strip_size_kb": 0, 00:12:24.083 "state": "online", 00:12:24.083 "raid_level": "raid1", 00:12:24.083 "superblock": true, 00:12:24.083 "num_base_bdevs": 2, 00:12:24.083 "num_base_bdevs_discovered": 1, 00:12:24.083 "num_base_bdevs_operational": 1, 00:12:24.083 "base_bdevs_list": [ 00:12:24.083 { 00:12:24.083 "name": null, 00:12:24.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.083 "is_configured": false, 00:12:24.083 "data_offset": 0, 00:12:24.083 "data_size": 63488 00:12:24.083 }, 00:12:24.083 { 00:12:24.083 "name": "BaseBdev2", 00:12:24.083 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:24.083 "is_configured": true, 00:12:24.083 "data_offset": 2048, 00:12:24.083 "data_size": 63488 00:12:24.083 } 00:12:24.083 ] 00:12:24.083 }' 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.083 10:40:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.354 10:40:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.354 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.354 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.354 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.354 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.354 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.354 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.354 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.354 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.615 "name": "raid_bdev1", 00:12:24.615 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:24.615 "strip_size_kb": 0, 00:12:24.615 "state": "online", 00:12:24.615 "raid_level": "raid1", 00:12:24.615 "superblock": true, 00:12:24.615 "num_base_bdevs": 2, 00:12:24.615 "num_base_bdevs_discovered": 1, 00:12:24.615 "num_base_bdevs_operational": 1, 00:12:24.615 "base_bdevs_list": [ 00:12:24.615 { 00:12:24.615 "name": null, 00:12:24.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.615 "is_configured": false, 00:12:24.615 "data_offset": 0, 00:12:24.615 "data_size": 63488 00:12:24.615 }, 00:12:24.615 { 00:12:24.615 "name": "BaseBdev2", 00:12:24.615 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:24.615 "is_configured": true, 00:12:24.615 "data_offset": 2048, 00:12:24.615 "data_size": 
63488 00:12:24.615 } 00:12:24.615 ] 00:12:24.615 }' 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.615 [2024-11-18 10:40:50.375139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:24.615 [2024-11-18 10:40:50.375205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.615 [2024-11-18 10:40:50.375228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:24.615 [2024-11-18 10:40:50.375245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.615 [2024-11-18 10:40:50.375678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.615 [2024-11-18 10:40:50.375702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:24.615 [2024-11-18 10:40:50.375782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:24.615 [2024-11-18 10:40:50.375803] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:24.615 [2024-11-18 10:40:50.375813] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:24.615 [2024-11-18 10:40:50.375823] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:24.615 BaseBdev1 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.615 10:40:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.554 "name": "raid_bdev1", 00:12:25.554 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:25.554 "strip_size_kb": 0, 00:12:25.554 "state": "online", 00:12:25.554 "raid_level": "raid1", 00:12:25.554 "superblock": true, 00:12:25.554 "num_base_bdevs": 2, 00:12:25.554 "num_base_bdevs_discovered": 1, 00:12:25.554 "num_base_bdevs_operational": 1, 00:12:25.554 "base_bdevs_list": [ 00:12:25.554 { 00:12:25.554 "name": null, 00:12:25.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.554 "is_configured": false, 00:12:25.554 "data_offset": 0, 00:12:25.554 "data_size": 63488 00:12:25.554 }, 00:12:25.554 { 00:12:25.554 "name": "BaseBdev2", 00:12:25.554 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:25.554 "is_configured": true, 00:12:25.554 "data_offset": 2048, 00:12:25.554 "data_size": 63488 00:12:25.554 } 00:12:25.554 ] 00:12:25.554 }' 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.554 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.123 "name": "raid_bdev1", 00:12:26.123 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:26.123 "strip_size_kb": 0, 00:12:26.123 "state": "online", 00:12:26.123 "raid_level": "raid1", 00:12:26.123 "superblock": true, 00:12:26.123 "num_base_bdevs": 2, 00:12:26.123 "num_base_bdevs_discovered": 1, 00:12:26.123 "num_base_bdevs_operational": 1, 00:12:26.123 "base_bdevs_list": [ 00:12:26.123 { 00:12:26.123 "name": null, 00:12:26.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.123 "is_configured": false, 00:12:26.123 "data_offset": 0, 00:12:26.123 "data_size": 63488 00:12:26.123 }, 00:12:26.123 { 00:12:26.123 "name": "BaseBdev2", 00:12:26.123 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:26.123 "is_configured": true, 00:12:26.123 "data_offset": 2048, 00:12:26.123 "data_size": 63488 00:12:26.123 } 00:12:26.123 ] 00:12:26.123 }' 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.123 10:40:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.123 [2024-11-18 10:40:51.968731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.123 [2024-11-18 10:40:51.968893] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:26.123 [2024-11-18 10:40:51.968908] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:26.123 request: 00:12:26.123 { 00:12:26.123 "base_bdev": "BaseBdev1", 00:12:26.123 "raid_bdev": "raid_bdev1", 00:12:26.123 "method": 
"bdev_raid_add_base_bdev", 00:12:26.123 "req_id": 1 00:12:26.123 } 00:12:26.123 Got JSON-RPC error response 00:12:26.123 response: 00:12:26.123 { 00:12:26.123 "code": -22, 00:12:26.123 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:26.123 } 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.123 10:40:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.504 10:40:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.504 10:40:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.504 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.504 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.504 "name": "raid_bdev1", 00:12:27.504 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:27.504 "strip_size_kb": 0, 00:12:27.504 "state": "online", 00:12:27.504 "raid_level": "raid1", 00:12:27.504 "superblock": true, 00:12:27.504 "num_base_bdevs": 2, 00:12:27.504 "num_base_bdevs_discovered": 1, 00:12:27.504 "num_base_bdevs_operational": 1, 00:12:27.504 "base_bdevs_list": [ 00:12:27.504 { 00:12:27.504 "name": null, 00:12:27.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.504 "is_configured": false, 00:12:27.504 "data_offset": 0, 00:12:27.504 "data_size": 63488 00:12:27.504 }, 00:12:27.504 { 00:12:27.504 "name": "BaseBdev2", 00:12:27.504 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:27.504 "is_configured": true, 00:12:27.504 "data_offset": 2048, 00:12:27.504 "data_size": 63488 00:12:27.504 } 00:12:27.504 ] 00:12:27.504 }' 00:12:27.504 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.504 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.764 "name": "raid_bdev1", 00:12:27.764 "uuid": "745aba69-5d32-47b3-a013-ed80d57a9147", 00:12:27.764 "strip_size_kb": 0, 00:12:27.764 "state": "online", 00:12:27.764 "raid_level": "raid1", 00:12:27.764 "superblock": true, 00:12:27.764 "num_base_bdevs": 2, 00:12:27.764 "num_base_bdevs_discovered": 1, 00:12:27.764 "num_base_bdevs_operational": 1, 00:12:27.764 "base_bdevs_list": [ 00:12:27.764 { 00:12:27.764 "name": null, 00:12:27.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.764 "is_configured": false, 00:12:27.764 "data_offset": 0, 00:12:27.764 "data_size": 63488 00:12:27.764 }, 00:12:27.764 { 00:12:27.764 "name": "BaseBdev2", 00:12:27.764 "uuid": "a4f23d2d-fc21-58e7-b76c-0c47c17075ae", 00:12:27.764 "is_configured": true, 00:12:27.764 "data_offset": 2048, 00:12:27.764 "data_size": 63488 00:12:27.764 } 00:12:27.764 ] 00:12:27.764 }' 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75577 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75577 ']' 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75577 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.764 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75577 00:12:28.023 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.023 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.023 killing process with pid 75577 00:12:28.023 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75577' 00:12:28.023 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75577 00:12:28.023 Received shutdown signal, test time was about 60.000000 seconds 00:12:28.023 00:12:28.023 Latency(us) 00:12:28.023 [2024-11-18T10:40:53.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.023 [2024-11-18T10:40:53.908Z] =================================================================================================================== 00:12:28.023 [2024-11-18T10:40:53.908Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:28.023 [2024-11-18 10:40:53.649862] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.023 [2024-11-18 
10:40:53.649990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.023 10:40:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75577 00:12:28.023 [2024-11-18 10:40:53.650046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.023 [2024-11-18 10:40:53.650058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:28.282 [2024-11-18 10:40:53.938113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.220 10:40:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:29.220 00:12:29.220 real 0m23.245s 00:12:29.220 user 0m27.788s 00:12:29.220 sys 0m4.038s 00:12:29.220 10:40:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.220 10:40:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.220 ************************************ 00:12:29.220 END TEST raid_rebuild_test_sb 00:12:29.220 ************************************ 00:12:29.220 10:40:55 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:29.220 10:40:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:29.220 10:40:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.220 10:40:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.220 ************************************ 00:12:29.220 START TEST raid_rebuild_test_io 00:12:29.220 ************************************ 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:29.220 
10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76302 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76302 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76302 ']' 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.220 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.479 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:29.479 Zero copy mechanism will not be used. 00:12:29.479 [2024-11-18 10:40:55.153582] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:29.479 [2024-11-18 10:40:55.153709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76302 ] 00:12:29.479 [2024-11-18 10:40:55.324191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.738 [2024-11-18 10:40:55.428422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.738 [2024-11-18 10:40:55.607708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.738 [2024-11-18 10:40:55.607766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.307 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.307 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:30.307 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:30.307 10:40:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:30.307 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.307 10:40:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.307 BaseBdev1_malloc 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.307 [2024-11-18 10:40:56.014672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:30.307 [2024-11-18 10:40:56.014739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.307 [2024-11-18 10:40:56.014761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:30.307 [2024-11-18 10:40:56.014772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.307 [2024-11-18 10:40:56.016741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.307 [2024-11-18 10:40:56.016781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.307 BaseBdev1 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.307 BaseBdev2_malloc 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.307 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.308 [2024-11-18 10:40:56.070205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:30.308 [2024-11-18 10:40:56.070292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.308 [2024-11-18 10:40:56.070314] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:30.308 [2024-11-18 10:40:56.070325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.308 [2024-11-18 10:40:56.072419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.308 [2024-11-18 10:40:56.072460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:30.308 BaseBdev2 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.308 spare_malloc 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.308 spare_delay 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.308 [2024-11-18 10:40:56.162535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:30.308 [2024-11-18 10:40:56.162592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.308 [2024-11-18 10:40:56.162612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:30.308 [2024-11-18 10:40:56.162621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.308 [2024-11-18 10:40:56.164629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.308 [2024-11-18 10:40:56.164668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.308 spare 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.308 [2024-11-18 10:40:56.174579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.308 [2024-11-18 10:40:56.176358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.308 [2024-11-18 10:40:56.176443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:30.308 [2024-11-18 10:40:56.176456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:30.308 [2024-11-18 10:40:56.176707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:30.308 [2024-11-18 10:40:56.176886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:30.308 [2024-11-18 10:40:56.176904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:30.308 [2024-11-18 10:40:56.177062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.308 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.568 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.568 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.568 
"name": "raid_bdev1", 00:12:30.568 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:30.568 "strip_size_kb": 0, 00:12:30.568 "state": "online", 00:12:30.568 "raid_level": "raid1", 00:12:30.568 "superblock": false, 00:12:30.568 "num_base_bdevs": 2, 00:12:30.568 "num_base_bdevs_discovered": 2, 00:12:30.568 "num_base_bdevs_operational": 2, 00:12:30.568 "base_bdevs_list": [ 00:12:30.568 { 00:12:30.568 "name": "BaseBdev1", 00:12:30.568 "uuid": "c55dd322-a79c-56e6-9bbf-c4e7eb40ef9a", 00:12:30.568 "is_configured": true, 00:12:30.568 "data_offset": 0, 00:12:30.568 "data_size": 65536 00:12:30.568 }, 00:12:30.568 { 00:12:30.568 "name": "BaseBdev2", 00:12:30.568 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:30.568 "is_configured": true, 00:12:30.568 "data_offset": 0, 00:12:30.568 "data_size": 65536 00:12:30.568 } 00:12:30.568 ] 00:12:30.568 }' 00:12:30.568 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.568 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.828 [2024-11-18 10:40:56.606098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.828 [2024-11-18 10:40:56.701623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.828 10:40:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.828 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.087 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.087 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.087 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.087 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.087 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.087 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.087 "name": "raid_bdev1", 00:12:31.087 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:31.087 "strip_size_kb": 0, 00:12:31.087 "state": "online", 00:12:31.087 "raid_level": "raid1", 00:12:31.087 "superblock": false, 00:12:31.087 "num_base_bdevs": 2, 00:12:31.087 "num_base_bdevs_discovered": 1, 00:12:31.087 "num_base_bdevs_operational": 1, 00:12:31.087 "base_bdevs_list": [ 00:12:31.087 { 00:12:31.087 "name": null, 00:12:31.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.087 "is_configured": false, 00:12:31.087 "data_offset": 0, 00:12:31.087 "data_size": 65536 00:12:31.087 }, 00:12:31.087 { 00:12:31.087 "name": "BaseBdev2", 00:12:31.087 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:31.087 "is_configured": true, 00:12:31.087 "data_offset": 0, 00:12:31.087 "data_size": 65536 00:12:31.087 } 00:12:31.087 ] 00:12:31.087 }' 00:12:31.087 10:40:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:31.087 10:40:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.087 [2024-11-18 10:40:56.797686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:31.087 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:31.087 Zero copy mechanism will not be used. 00:12:31.087 Running I/O for 60 seconds... 00:12:31.347 10:40:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:31.347 10:40:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.347 10:40:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.347 [2024-11-18 10:40:57.153254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.347 10:40:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.347 10:40:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:31.347 [2024-11-18 10:40:57.216562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:31.347 [2024-11-18 10:40:57.218321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.606 [2024-11-18 10:40:57.335944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:31.606 [2024-11-18 10:40:57.336446] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:31.865 [2024-11-18 10:40:57.573112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:31.865 [2024-11-18 10:40:57.573388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:32.124 174.00 IOPS, 522.00 MiB/s 
[2024-11-18T10:40:58.009Z] [2024-11-18 10:40:57.911013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.385 "name": "raid_bdev1", 00:12:32.385 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:32.385 "strip_size_kb": 0, 00:12:32.385 "state": "online", 00:12:32.385 "raid_level": "raid1", 00:12:32.385 "superblock": false, 00:12:32.385 "num_base_bdevs": 2, 00:12:32.385 "num_base_bdevs_discovered": 2, 00:12:32.385 "num_base_bdevs_operational": 2, 00:12:32.385 "process": { 00:12:32.385 "type": "rebuild", 00:12:32.385 "target": "spare", 00:12:32.385 "progress": { 00:12:32.385 "blocks": 10240, 00:12:32.385 "percent": 15 00:12:32.385 } 00:12:32.385 }, 00:12:32.385 "base_bdevs_list": [ 00:12:32.385 { 
00:12:32.385 "name": "spare", 00:12:32.385 "uuid": "bc81de1e-c7df-5fad-a503-78957abece60", 00:12:32.385 "is_configured": true, 00:12:32.385 "data_offset": 0, 00:12:32.385 "data_size": 65536 00:12:32.385 }, 00:12:32.385 { 00:12:32.385 "name": "BaseBdev2", 00:12:32.385 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:32.385 "is_configured": true, 00:12:32.385 "data_offset": 0, 00:12:32.385 "data_size": 65536 00:12:32.385 } 00:12:32.385 ] 00:12:32.385 }' 00:12:32.385 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.645 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.645 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.645 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.645 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:32.645 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.645 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.646 [2024-11-18 10:40:58.331379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.646 [2024-11-18 10:40:58.355788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:32.646 [2024-11-18 10:40:58.461038] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:32.646 [2024-11-18 10:40:58.474615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.646 [2024-11-18 10:40:58.474657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.646 [2024-11-18 10:40:58.474672] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:12:32.646 [2024-11-18 10:40:58.511985] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.646 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.906 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.906 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.906 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.906 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.906 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.906 10:40:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.906 "name": "raid_bdev1", 00:12:32.906 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:32.906 "strip_size_kb": 0, 00:12:32.906 "state": "online", 00:12:32.906 "raid_level": "raid1", 00:12:32.906 "superblock": false, 00:12:32.906 "num_base_bdevs": 2, 00:12:32.906 "num_base_bdevs_discovered": 1, 00:12:32.906 "num_base_bdevs_operational": 1, 00:12:32.906 "base_bdevs_list": [ 00:12:32.906 { 00:12:32.906 "name": null, 00:12:32.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.906 "is_configured": false, 00:12:32.906 "data_offset": 0, 00:12:32.906 "data_size": 65536 00:12:32.906 }, 00:12:32.906 { 00:12:32.906 "name": "BaseBdev2", 00:12:32.906 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:32.906 "is_configured": true, 00:12:32.906 "data_offset": 0, 00:12:32.906 "data_size": 65536 00:12:32.906 } 00:12:32.906 ] 00:12:32.906 }' 00:12:32.906 10:40:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.906 10:40:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.165 146.00 IOPS, 438.00 MiB/s [2024-11-18T10:40:59.050Z] 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.165 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.165 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.165 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.165 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.165 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.165 10:40:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.165 10:40:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.165 10:40:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.424 "name": "raid_bdev1", 00:12:33.424 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:33.424 "strip_size_kb": 0, 00:12:33.424 "state": "online", 00:12:33.424 "raid_level": "raid1", 00:12:33.424 "superblock": false, 00:12:33.424 "num_base_bdevs": 2, 00:12:33.424 "num_base_bdevs_discovered": 1, 00:12:33.424 "num_base_bdevs_operational": 1, 00:12:33.424 "base_bdevs_list": [ 00:12:33.424 { 00:12:33.424 "name": null, 00:12:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.424 "is_configured": false, 00:12:33.424 "data_offset": 0, 00:12:33.424 "data_size": 65536 00:12:33.424 }, 00:12:33.424 { 00:12:33.424 "name": "BaseBdev2", 00:12:33.424 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:33.424 "is_configured": true, 00:12:33.424 "data_offset": 0, 00:12:33.424 "data_size": 65536 00:12:33.424 } 00:12:33.424 ] 00:12:33.424 }' 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.424 [2024-11-18 10:40:59.136465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.424 10:40:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:33.424 [2024-11-18 10:40:59.183076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:33.424 [2024-11-18 10:40:59.185318] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.424 [2024-11-18 10:40:59.305747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:33.424 [2024-11-18 10:40:59.306725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:33.682 [2024-11-18 10:40:59.525565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:33.682 [2024-11-18 10:40:59.526093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:34.201 167.67 IOPS, 503.00 MiB/s [2024-11-18T10:41:00.086Z] [2024-11-18 10:40:59.853627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:34.201 [2024-11-18 10:40:59.968331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:34.201 [2024-11-18 10:40:59.974030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.460 "name": "raid_bdev1", 00:12:34.460 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:34.460 "strip_size_kb": 0, 00:12:34.460 "state": "online", 00:12:34.460 "raid_level": "raid1", 00:12:34.460 "superblock": false, 00:12:34.460 "num_base_bdevs": 2, 00:12:34.460 "num_base_bdevs_discovered": 2, 00:12:34.460 "num_base_bdevs_operational": 2, 00:12:34.460 "process": { 00:12:34.460 "type": "rebuild", 00:12:34.460 "target": "spare", 00:12:34.460 "progress": { 00:12:34.460 "blocks": 12288, 00:12:34.460 "percent": 18 00:12:34.460 } 00:12:34.460 }, 00:12:34.460 "base_bdevs_list": [ 00:12:34.460 { 00:12:34.460 "name": "spare", 00:12:34.460 "uuid": "bc81de1e-c7df-5fad-a503-78957abece60", 00:12:34.460 "is_configured": true, 00:12:34.460 "data_offset": 0, 00:12:34.460 "data_size": 65536 00:12:34.460 }, 00:12:34.460 { 00:12:34.460 "name": "BaseBdev2", 00:12:34.460 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:34.460 "is_configured": true, 00:12:34.460 
"data_offset": 0, 00:12:34.460 "data_size": 65536 00:12:34.460 } 00:12:34.460 ] 00:12:34.460 }' 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.460 [2024-11-18 10:41:00.304293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=402 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.460 10:41:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.720 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.720 "name": "raid_bdev1", 00:12:34.720 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:34.720 "strip_size_kb": 0, 00:12:34.720 "state": "online", 00:12:34.720 "raid_level": "raid1", 00:12:34.720 "superblock": false, 00:12:34.720 "num_base_bdevs": 2, 00:12:34.720 "num_base_bdevs_discovered": 2, 00:12:34.720 "num_base_bdevs_operational": 2, 00:12:34.720 "process": { 00:12:34.720 "type": "rebuild", 00:12:34.720 "target": "spare", 00:12:34.720 "progress": { 00:12:34.720 "blocks": 14336, 00:12:34.720 "percent": 21 00:12:34.720 } 00:12:34.720 }, 00:12:34.720 "base_bdevs_list": [ 00:12:34.720 { 00:12:34.720 "name": "spare", 00:12:34.720 "uuid": "bc81de1e-c7df-5fad-a503-78957abece60", 00:12:34.720 "is_configured": true, 00:12:34.720 "data_offset": 0, 00:12:34.720 "data_size": 65536 00:12:34.720 }, 00:12:34.720 { 00:12:34.720 "name": "BaseBdev2", 00:12:34.720 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:34.720 "is_configured": true, 00:12:34.720 "data_offset": 0, 00:12:34.720 "data_size": 65536 00:12:34.720 } 00:12:34.720 ] 00:12:34.720 }' 00:12:34.720 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.720 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.720 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.720 
[2024-11-18 10:41:00.419959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:34.720 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.720 10:41:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:35.549 145.75 IOPS, 437.25 MiB/s [2024-11-18T10:41:01.434Z] [2024-11-18 10:41:01.129581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:35.549 [2024-11-18 10:41:01.130026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.809 10:41:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.809 "name": "raid_bdev1", 00:12:35.809 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:35.809 "strip_size_kb": 0, 00:12:35.809 "state": "online", 00:12:35.809 "raid_level": "raid1", 00:12:35.809 "superblock": false, 00:12:35.809 "num_base_bdevs": 2, 00:12:35.809 "num_base_bdevs_discovered": 2, 00:12:35.809 "num_base_bdevs_operational": 2, 00:12:35.809 "process": { 00:12:35.809 "type": "rebuild", 00:12:35.809 "target": "spare", 00:12:35.809 "progress": { 00:12:35.809 "blocks": 28672, 00:12:35.809 "percent": 43 00:12:35.809 } 00:12:35.809 }, 00:12:35.809 "base_bdevs_list": [ 00:12:35.809 { 00:12:35.809 "name": "spare", 00:12:35.809 "uuid": "bc81de1e-c7df-5fad-a503-78957abece60", 00:12:35.809 "is_configured": true, 00:12:35.809 "data_offset": 0, 00:12:35.809 "data_size": 65536 00:12:35.809 }, 00:12:35.809 { 00:12:35.809 "name": "BaseBdev2", 00:12:35.809 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:35.809 "is_configured": true, 00:12:35.809 "data_offset": 0, 00:12:35.809 "data_size": 65536 00:12:35.809 } 00:12:35.809 ] 00:12:35.809 }' 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.809 [2024-11-18 10:41:01.567274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.809 10:41:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:36.068 [2024-11-18 10:41:01.782332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:12:36.636 126.40 IOPS, 379.20 MiB/s [2024-11-18T10:41:02.521Z] [2024-11-18 10:41:02.399196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.895 "name": "raid_bdev1", 00:12:36.895 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:36.895 "strip_size_kb": 0, 00:12:36.895 "state": "online", 00:12:36.895 "raid_level": "raid1", 00:12:36.895 "superblock": false, 00:12:36.895 "num_base_bdevs": 2, 00:12:36.895 "num_base_bdevs_discovered": 2, 00:12:36.895 "num_base_bdevs_operational": 2, 00:12:36.895 "process": { 00:12:36.895 "type": "rebuild", 
00:12:36.895 "target": "spare", 00:12:36.895 "progress": { 00:12:36.895 "blocks": 47104, 00:12:36.895 "percent": 71 00:12:36.895 } 00:12:36.895 }, 00:12:36.895 "base_bdevs_list": [ 00:12:36.895 { 00:12:36.895 "name": "spare", 00:12:36.895 "uuid": "bc81de1e-c7df-5fad-a503-78957abece60", 00:12:36.895 "is_configured": true, 00:12:36.895 "data_offset": 0, 00:12:36.895 "data_size": 65536 00:12:36.895 }, 00:12:36.895 { 00:12:36.895 "name": "BaseBdev2", 00:12:36.895 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:36.895 "is_configured": true, 00:12:36.895 "data_offset": 0, 00:12:36.895 "data_size": 65536 00:12:36.895 } 00:12:36.895 ] 00:12:36.895 }' 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.895 10:41:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:37.412 113.00 IOPS, 339.00 MiB/s [2024-11-18T10:41:03.297Z] [2024-11-18 10:41:03.155396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:37.672 [2024-11-18 10:41:03.484715] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:37.932 [2024-11-18 10:41:03.589918] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:37.932 [2024-11-18 10:41:03.592589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.932 "name": "raid_bdev1", 00:12:37.932 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:37.932 "strip_size_kb": 0, 00:12:37.932 "state": "online", 00:12:37.932 "raid_level": "raid1", 00:12:37.932 "superblock": false, 00:12:37.932 "num_base_bdevs": 2, 00:12:37.932 "num_base_bdevs_discovered": 2, 00:12:37.932 "num_base_bdevs_operational": 2, 00:12:37.932 "base_bdevs_list": [ 00:12:37.932 { 00:12:37.932 "name": "spare", 00:12:37.932 "uuid": "bc81de1e-c7df-5fad-a503-78957abece60", 00:12:37.932 "is_configured": true, 00:12:37.932 "data_offset": 0, 00:12:37.932 "data_size": 65536 00:12:37.932 }, 00:12:37.932 { 00:12:37.932 "name": "BaseBdev2", 00:12:37.932 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:37.932 "is_configured": true, 00:12:37.932 "data_offset": 0, 00:12:37.932 "data_size": 65536 00:12:37.932 } 00:12:37.932 ] 00:12:37.932 }' 
00:12:37.932 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.192 101.57 IOPS, 304.71 MiB/s [2024-11-18T10:41:04.077Z] 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.192 "name": "raid_bdev1", 00:12:38.192 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:38.192 "strip_size_kb": 0, 00:12:38.192 "state": "online", 00:12:38.192 "raid_level": "raid1", 
00:12:38.192 "superblock": false, 00:12:38.192 "num_base_bdevs": 2, 00:12:38.192 "num_base_bdevs_discovered": 2, 00:12:38.192 "num_base_bdevs_operational": 2, 00:12:38.192 "base_bdevs_list": [ 00:12:38.192 { 00:12:38.192 "name": "spare", 00:12:38.192 "uuid": "bc81de1e-c7df-5fad-a503-78957abece60", 00:12:38.192 "is_configured": true, 00:12:38.192 "data_offset": 0, 00:12:38.192 "data_size": 65536 00:12:38.192 }, 00:12:38.192 { 00:12:38.192 "name": "BaseBdev2", 00:12:38.192 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:38.192 "is_configured": true, 00:12:38.192 "data_offset": 0, 00:12:38.192 "data_size": 65536 00:12:38.192 } 00:12:38.192 ] 00:12:38.192 }' 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.192 10:41:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.192 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.192 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.192 "name": "raid_bdev1", 00:12:38.192 "uuid": "41536220-1174-4614-b93f-6ee571c4cd5a", 00:12:38.192 "strip_size_kb": 0, 00:12:38.192 "state": "online", 00:12:38.192 "raid_level": "raid1", 00:12:38.192 "superblock": false, 00:12:38.192 "num_base_bdevs": 2, 00:12:38.192 "num_base_bdevs_discovered": 2, 00:12:38.192 "num_base_bdevs_operational": 2, 00:12:38.192 "base_bdevs_list": [ 00:12:38.192 { 00:12:38.192 "name": "spare", 00:12:38.192 "uuid": "bc81de1e-c7df-5fad-a503-78957abece60", 00:12:38.192 "is_configured": true, 00:12:38.192 "data_offset": 0, 00:12:38.192 "data_size": 65536 00:12:38.192 }, 00:12:38.192 { 00:12:38.192 "name": "BaseBdev2", 00:12:38.192 "uuid": "654bf130-3c2c-5d25-8fcd-14fb090f1d02", 00:12:38.192 "is_configured": true, 00:12:38.192 "data_offset": 0, 00:12:38.192 "data_size": 65536 00:12:38.192 } 00:12:38.192 ] 00:12:38.192 }' 00:12:38.192 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.192 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.781 [2024-11-18 10:41:04.365233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.781 [2024-11-18 10:41:04.365265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.781 00:12:38.781 Latency(us) 00:12:38.781 [2024-11-18T10:41:04.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.781 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:38.781 raid_bdev1 : 7.67 97.94 293.81 0.00 0.00 14472.71 304.07 112183.90 00:12:38.781 [2024-11-18T10:41:04.666Z] =================================================================================================================== 00:12:38.781 [2024-11-18T10:41:04.666Z] Total : 97.94 293.81 0.00 0.00 14472.71 304.07 112183.90 00:12:38.781 [2024-11-18 10:41:04.473028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.781 [2024-11-18 10:41:04.473074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.781 [2024-11-18 10:41:04.473147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.781 [2024-11-18 10:41:04.473166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:38.781 { 00:12:38.781 "results": [ 00:12:38.781 { 00:12:38.781 "job": "raid_bdev1", 00:12:38.781 "core_mask": "0x1", 00:12:38.781 "workload": "randrw", 00:12:38.781 "percentage": 50, 00:12:38.781 "status": "finished", 00:12:38.781 "queue_depth": 2, 00:12:38.781 "io_size": 3145728, 00:12:38.781 "runtime": 7.668195, 00:12:38.781 "iops": 
97.9369982114435, 00:12:38.781 "mibps": 293.8109946343305, 00:12:38.781 "io_failed": 0, 00:12:38.781 "io_timeout": 0, 00:12:38.781 "avg_latency_us": 14472.709975055095, 00:12:38.781 "min_latency_us": 304.0698689956332, 00:12:38.781 "max_latency_us": 112183.89519650655 00:12:38.781 } 00:12:38.781 ], 00:12:38.781 "core_count": 1 00:12:38.781 } 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:38.781 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:39.040 /dev/nbd0 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.040 1+0 records in 00:12:39.040 1+0 records out 00:12:39.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437275 s, 9.4 MB/s 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # size=4096 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.040 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:39.300 /dev/nbd1 00:12:39.300 10:41:04 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:39.300 10:41:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:39.300 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:39.300 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:39.300 10:41:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.300 1+0 records in 00:12:39.300 1+0 records out 00:12:39.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343165 s, 11.9 MB/s 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:39.300 10:41:05 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.300 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.560 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76302 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76302 ']' 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76302 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76302 00:12:39.820 killing process with pid 76302 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76302' 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76302 00:12:39.820 Received shutdown signal, test time was about 8.837022 seconds 00:12:39.820 00:12:39.820 Latency(us) 00:12:39.820 [2024-11-18T10:41:05.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.820 [2024-11-18T10:41:05.705Z] =================================================================================================================== 00:12:39.820 [2024-11-18T10:41:05.705Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:39.820 10:41:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76302 00:12:39.820 [2024-11-18 10:41:05.619584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.079 [2024-11-18 10:41:05.850234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.461 10:41:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:41.461 00:12:41.461 real 0m11.895s 00:12:41.461 user 0m14.905s 00:12:41.461 sys 0m1.440s 00:12:41.461 10:41:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.461 10:41:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.461 ************************************ 
00:12:41.461 END TEST raid_rebuild_test_io 00:12:41.461 ************************************ 00:12:41.461 10:41:07 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:41.461 10:41:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:41.461 10:41:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.461 10:41:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.461 ************************************ 00:12:41.461 START TEST raid_rebuild_test_sb_io 00:12:41.461 ************************************ 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76678 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76678 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76678 ']' 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.461 
10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.461 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.461 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:41.461 Zero copy mechanism will not be used. 00:12:41.461 [2024-11-18 10:41:07.120374] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:41.461 [2024-11-18 10:41:07.120504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76678 ] 00:12:41.461 [2024-11-18 10:41:07.292118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.721 [2024-11-18 10:41:07.402033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.721 [2024-11-18 10:41:07.597105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.721 [2024-11-18 10:41:07.597168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 BaseBdev1_malloc 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 [2024-11-18 10:41:07.983315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:42.290 [2024-11-18 10:41:07.983391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.290 [2024-11-18 10:41:07.983426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:42.290 [2024-11-18 10:41:07.983442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.290 [2024-11-18 10:41:07.985670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.290 [2024-11-18 10:41:07.985713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:42.290 BaseBdev1 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:42.290 10:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 BaseBdev2_malloc 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 [2024-11-18 10:41:08.028982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:42.290 [2024-11-18 10:41:08.029043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.290 [2024-11-18 10:41:08.029068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:42.290 [2024-11-18 10:41:08.029085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.290 [2024-11-18 10:41:08.031087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.290 [2024-11-18 10:41:08.031132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:42.290 BaseBdev2 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 spare_malloc 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 10:41:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 spare_delay 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 [2024-11-18 10:41:08.118379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:42.290 [2024-11-18 10:41:08.118463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.290 [2024-11-18 10:41:08.118488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:42.290 [2024-11-18 10:41:08.118503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.290 [2024-11-18 10:41:08.120581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.290 [2024-11-18 10:41:08.120619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.290 spare 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 10:41:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 [2024-11-18 10:41:08.126423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.290 [2024-11-18 10:41:08.128153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.290 [2024-11-18 10:41:08.128337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:42.290 [2024-11-18 10:41:08.128354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.290 [2024-11-18 10:41:08.128582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:42.290 [2024-11-18 10:41:08.128757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:42.290 [2024-11-18 10:41:08.128773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:42.290 [2024-11-18 10:41:08.128925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.550 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.550 "name": "raid_bdev1", 00:12:42.550 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:42.550 "strip_size_kb": 0, 00:12:42.550 "state": "online", 00:12:42.550 "raid_level": "raid1", 00:12:42.550 "superblock": true, 00:12:42.550 "num_base_bdevs": 2, 00:12:42.550 "num_base_bdevs_discovered": 2, 00:12:42.550 "num_base_bdevs_operational": 2, 00:12:42.550 "base_bdevs_list": [ 00:12:42.550 { 00:12:42.550 "name": "BaseBdev1", 00:12:42.550 "uuid": "1f988fda-f9bf-5d31-8607-360731dae026", 00:12:42.550 "is_configured": true, 00:12:42.550 "data_offset": 2048, 00:12:42.550 "data_size": 63488 00:12:42.550 }, 00:12:42.550 { 00:12:42.550 "name": "BaseBdev2", 00:12:42.550 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:42.550 "is_configured": true, 00:12:42.550 "data_offset": 2048, 00:12:42.550 "data_size": 63488 00:12:42.550 } 00:12:42.550 ] 00:12:42.550 }' 00:12:42.550 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:42.550 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.810 [2024-11-18 10:41:08.545957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:42.810 [2024-11-18 10:41:08.621547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.810 10:41:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.810 "name": "raid_bdev1", 00:12:42.810 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:42.810 "strip_size_kb": 0, 00:12:42.810 "state": "online", 00:12:42.810 "raid_level": "raid1", 00:12:42.810 "superblock": true, 00:12:42.810 "num_base_bdevs": 2, 00:12:42.810 "num_base_bdevs_discovered": 1, 00:12:42.810 "num_base_bdevs_operational": 1, 00:12:42.810 "base_bdevs_list": [ 00:12:42.810 { 00:12:42.810 "name": null, 00:12:42.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.810 "is_configured": false, 00:12:42.810 "data_offset": 0, 00:12:42.810 "data_size": 63488 00:12:42.810 }, 00:12:42.810 { 00:12:42.810 "name": "BaseBdev2", 00:12:42.810 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:42.810 "is_configured": true, 00:12:42.810 "data_offset": 2048, 00:12:42.810 "data_size": 63488 00:12:42.810 } 00:12:42.810 ] 00:12:42.810 }' 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.810 10:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.069 [2024-11-18 10:41:08.720934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:43.069 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:43.069 Zero copy mechanism will not be used. 00:12:43.069 Running I/O for 60 seconds... 
00:12:43.328 10:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.328 10:41:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.328 10:41:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 [2024-11-18 10:41:09.039375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.328 10:41:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.328 10:41:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:43.328 [2024-11-18 10:41:09.084263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:43.328 [2024-11-18 10:41:09.086078] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:43.328 [2024-11-18 10:41:09.194581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:43.328 [2024-11-18 10:41:09.194989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:43.586 [2024-11-18 10:41:09.323939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:43.586 [2024-11-18 10:41:09.324198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:43.845 [2024-11-18 10:41:09.647457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:44.104 174.00 IOPS, 522.00 MiB/s [2024-11-18T10:41:09.989Z] [2024-11-18 10:41:09.855725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.363 "name": "raid_bdev1", 00:12:44.363 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:44.363 "strip_size_kb": 0, 00:12:44.363 "state": "online", 00:12:44.363 "raid_level": "raid1", 00:12:44.363 "superblock": true, 00:12:44.363 "num_base_bdevs": 2, 00:12:44.363 "num_base_bdevs_discovered": 2, 00:12:44.363 "num_base_bdevs_operational": 2, 00:12:44.363 "process": { 00:12:44.363 "type": "rebuild", 00:12:44.363 "target": "spare", 00:12:44.363 "progress": { 00:12:44.363 "blocks": 12288, 00:12:44.363 "percent": 19 00:12:44.363 } 00:12:44.363 }, 00:12:44.363 "base_bdevs_list": [ 00:12:44.363 { 00:12:44.363 "name": "spare", 00:12:44.363 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:44.363 "is_configured": true, 00:12:44.363 "data_offset": 2048, 00:12:44.363 "data_size": 63488 
00:12:44.363 }, 00:12:44.363 { 00:12:44.363 "name": "BaseBdev2", 00:12:44.363 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:44.363 "is_configured": true, 00:12:44.363 "data_offset": 2048, 00:12:44.363 "data_size": 63488 00:12:44.363 } 00:12:44.363 ] 00:12:44.363 }' 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.363 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.363 [2024-11-18 10:41:10.211632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.624 [2024-11-18 10:41:10.304267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:44.624 [2024-11-18 10:41:10.318103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.624 [2024-11-18 10:41:10.318158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.624 [2024-11-18 10:41:10.318184] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:44.624 [2024-11-18 10:41:10.360673] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.624 10:41:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.624 "name": "raid_bdev1", 00:12:44.624 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:44.624 "strip_size_kb": 0, 00:12:44.624 "state": "online", 00:12:44.624 "raid_level": "raid1", 00:12:44.624 
"superblock": true, 00:12:44.624 "num_base_bdevs": 2, 00:12:44.624 "num_base_bdevs_discovered": 1, 00:12:44.624 "num_base_bdevs_operational": 1, 00:12:44.624 "base_bdevs_list": [ 00:12:44.624 { 00:12:44.624 "name": null, 00:12:44.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.624 "is_configured": false, 00:12:44.624 "data_offset": 0, 00:12:44.624 "data_size": 63488 00:12:44.624 }, 00:12:44.624 { 00:12:44.624 "name": "BaseBdev2", 00:12:44.624 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:44.624 "is_configured": true, 00:12:44.624 "data_offset": 2048, 00:12:44.624 "data_size": 63488 00:12:44.624 } 00:12:44.624 ] 00:12:44.624 }' 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.624 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.144 179.50 IOPS, 538.50 MiB/s [2024-11-18T10:41:11.029Z] 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.144 "name": "raid_bdev1", 00:12:45.144 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:45.144 "strip_size_kb": 0, 00:12:45.144 "state": "online", 00:12:45.144 "raid_level": "raid1", 00:12:45.144 "superblock": true, 00:12:45.144 "num_base_bdevs": 2, 00:12:45.144 "num_base_bdevs_discovered": 1, 00:12:45.144 "num_base_bdevs_operational": 1, 00:12:45.144 "base_bdevs_list": [ 00:12:45.144 { 00:12:45.144 "name": null, 00:12:45.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.144 "is_configured": false, 00:12:45.144 "data_offset": 0, 00:12:45.144 "data_size": 63488 00:12:45.144 }, 00:12:45.144 { 00:12:45.144 "name": "BaseBdev2", 00:12:45.144 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:45.144 "is_configured": true, 00:12:45.144 "data_offset": 2048, 00:12:45.144 "data_size": 63488 00:12:45.144 } 00:12:45.144 ] 00:12:45.144 }' 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.144 10:41:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.144 [2024-11-18 10:41:10.967582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.144 10:41:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.144 10:41:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:45.144 [2024-11-18 10:41:11.019405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:45.144 [2024-11-18 10:41:11.021251] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.404 [2024-11-18 10:41:11.132866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:45.404 [2024-11-18 10:41:11.133341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:45.663 [2024-11-18 10:41:11.351090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:45.663 [2024-11-18 10:41:11.351367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:45.927 173.00 IOPS, 519.00 MiB/s [2024-11-18T10:41:11.812Z] [2024-11-18 10:41:11.788894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:45.927 [2024-11-18 10:41:11.789117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.186 "name": "raid_bdev1", 00:12:46.186 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:46.186 "strip_size_kb": 0, 00:12:46.186 "state": "online", 00:12:46.186 "raid_level": "raid1", 00:12:46.186 "superblock": true, 00:12:46.186 "num_base_bdevs": 2, 00:12:46.186 "num_base_bdevs_discovered": 2, 00:12:46.186 "num_base_bdevs_operational": 2, 00:12:46.186 "process": { 00:12:46.186 "type": "rebuild", 00:12:46.186 "target": "spare", 00:12:46.186 "progress": { 00:12:46.186 "blocks": 12288, 00:12:46.186 "percent": 19 00:12:46.186 } 00:12:46.186 }, 00:12:46.186 "base_bdevs_list": [ 00:12:46.186 { 00:12:46.186 "name": "spare", 00:12:46.186 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:46.186 "is_configured": true, 00:12:46.186 "data_offset": 2048, 00:12:46.186 "data_size": 63488 00:12:46.186 }, 00:12:46.186 { 00:12:46.186 "name": "BaseBdev2", 00:12:46.186 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:46.186 "is_configured": true, 00:12:46.186 "data_offset": 2048, 00:12:46.186 "data_size": 63488 00:12:46.186 } 00:12:46.186 ] 00:12:46.186 }' 00:12:46.186 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.445 [2024-11-18 10:41:12.100073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 
offset_begin: 12288 offset_end: 18432 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:46.445 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io 
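(Editorial note on the error recorded above: the log captures `bdev_raid.sh: line 666: [: =: unary operator expected`, triggered by the trace line `'[' = false ']'` — a single-bracket test whose variable expanded to nothing, leaving `[ = false ]`. This is a sketch of that failure class and the usual quoting fix; the variable name `flag` is hypothetical, not taken from the script.)

```shell
# Hypothetical stand-in for the script variable that expanded to empty.
flag=""

# Broken form (left commented out): with $flag empty and unquoted, the test
# collapses to `[ = false ]`, which bash rejects as a malformed unary
# expression — the "unary operator expected" error seen in the log.
# if [ $flag = false ]; then echo "no"; fi

# Fix 1: quote the expansion so the empty string survives as an operand.
if [ "$flag" = false ]; then result="match"; else result="no-match"; fi
echo "$result"

# Fix 2: use [[ ]], which does not word-split unquoted expansions.
if [[ $flag = false ]]; then result2="match"; else result2="no-match"; fi
echo "$result2"
```

Both fixed forms evaluate cleanly with an empty variable instead of aborting the test with a parse error.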
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.445 "name": "raid_bdev1", 00:12:46.445 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:46.445 "strip_size_kb": 0, 00:12:46.445 "state": "online", 00:12:46.445 "raid_level": "raid1", 00:12:46.445 "superblock": true, 00:12:46.445 "num_base_bdevs": 2, 00:12:46.445 "num_base_bdevs_discovered": 2, 00:12:46.445 "num_base_bdevs_operational": 2, 00:12:46.445 "process": { 00:12:46.445 "type": "rebuild", 00:12:46.445 "target": "spare", 00:12:46.445 "progress": { 00:12:46.445 "blocks": 14336, 00:12:46.445 "percent": 22 00:12:46.445 } 00:12:46.445 }, 00:12:46.445 "base_bdevs_list": [ 00:12:46.445 { 00:12:46.445 "name": "spare", 00:12:46.445 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:46.445 "is_configured": true, 00:12:46.445 "data_offset": 2048, 00:12:46.445 "data_size": 63488 00:12:46.445 }, 00:12:46.445 { 00:12:46.445 "name": "BaseBdev2", 00:12:46.445 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:46.445 "is_configured": true, 00:12:46.445 "data_offset": 2048, 00:12:46.445 "data_size": 63488 00:12:46.445 } 00:12:46.445 ] 00:12:46.445 }' 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.445 10:41:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.704 [2024-11-18 10:41:12.433153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:46.704 [2024-11-18 10:41:12.540432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:46.964 152.50 IOPS, 457.50 MiB/s [2024-11-18T10:41:12.849Z] [2024-11-18 10:41:12.748737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:47.224 [2024-11-18 10:41:12.855206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.485 "name": "raid_bdev1", 00:12:47.485 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:47.485 "strip_size_kb": 0, 00:12:47.485 "state": "online", 00:12:47.485 "raid_level": "raid1", 00:12:47.485 "superblock": true, 00:12:47.485 "num_base_bdevs": 2, 00:12:47.485 "num_base_bdevs_discovered": 2, 00:12:47.485 "num_base_bdevs_operational": 2, 00:12:47.485 "process": { 00:12:47.485 "type": "rebuild", 00:12:47.485 "target": "spare", 00:12:47.485 "progress": { 00:12:47.485 "blocks": 34816, 00:12:47.485 "percent": 54 00:12:47.485 } 00:12:47.485 }, 00:12:47.485 "base_bdevs_list": [ 00:12:47.485 { 00:12:47.485 "name": "spare", 00:12:47.485 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:47.485 "is_configured": true, 00:12:47.485 "data_offset": 2048, 00:12:47.485 "data_size": 63488 00:12:47.485 }, 00:12:47.485 { 00:12:47.485 "name": "BaseBdev2", 00:12:47.485 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:47.485 "is_configured": true, 00:12:47.485 "data_offset": 2048, 00:12:47.485 "data_size": 63488 00:12:47.485 } 00:12:47.485 ] 00:12:47.485 }' 00:12:47.485 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.744 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.744 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.744 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.744 10:41:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.003 [2024-11-18 10:41:13.629800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 
offset_end: 43008 00:12:48.263 130.80 IOPS, 392.40 MiB/s [2024-11-18T10:41:14.148Z] [2024-11-18 10:41:13.950275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:48.263 [2024-11-18 10:41:14.057264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.832 "name": "raid_bdev1", 00:12:48.832 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:48.832 "strip_size_kb": 0, 00:12:48.832 "state": "online", 00:12:48.832 "raid_level": "raid1", 00:12:48.832 "superblock": true, 00:12:48.832 
"num_base_bdevs": 2, 00:12:48.832 "num_base_bdevs_discovered": 2, 00:12:48.832 "num_base_bdevs_operational": 2, 00:12:48.832 "process": { 00:12:48.832 "type": "rebuild", 00:12:48.832 "target": "spare", 00:12:48.832 "progress": { 00:12:48.832 "blocks": 53248, 00:12:48.832 "percent": 83 00:12:48.832 } 00:12:48.832 }, 00:12:48.832 "base_bdevs_list": [ 00:12:48.832 { 00:12:48.832 "name": "spare", 00:12:48.832 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:48.832 "is_configured": true, 00:12:48.832 "data_offset": 2048, 00:12:48.832 "data_size": 63488 00:12:48.832 }, 00:12:48.832 { 00:12:48.832 "name": "BaseBdev2", 00:12:48.832 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:48.832 "is_configured": true, 00:12:48.832 "data_offset": 2048, 00:12:48.832 "data_size": 63488 00:12:48.832 } 00:12:48.832 ] 00:12:48.832 }' 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.832 10:41:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.092 113.83 IOPS, 341.50 MiB/s [2024-11-18T10:41:14.977Z] [2024-11-18 10:41:14.927426] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:49.354 [2024-11-18 10:41:15.032576] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:49.354 [2024-11-18 10:41:15.035104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.924 "name": "raid_bdev1", 00:12:49.924 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:49.924 "strip_size_kb": 0, 00:12:49.924 "state": "online", 00:12:49.924 "raid_level": "raid1", 00:12:49.924 "superblock": true, 00:12:49.924 "num_base_bdevs": 2, 00:12:49.924 "num_base_bdevs_discovered": 2, 00:12:49.924 "num_base_bdevs_operational": 2, 00:12:49.924 "base_bdevs_list": [ 00:12:49.924 { 00:12:49.924 "name": "spare", 00:12:49.924 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:49.924 "is_configured": true, 00:12:49.924 "data_offset": 2048, 00:12:49.924 "data_size": 63488 00:12:49.924 }, 00:12:49.924 { 00:12:49.924 "name": "BaseBdev2", 00:12:49.924 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:49.924 "is_configured": true, 00:12:49.924 "data_offset": 2048, 00:12:49.924 
"data_size": 63488 00:12:49.924 } 00:12:49.924 ] 00:12:49.924 }' 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.924 102.57 IOPS, 307.71 MiB/s [2024-11-18T10:41:15.809Z] 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.924 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.924 "name": "raid_bdev1", 00:12:49.924 "uuid": 
"c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:49.924 "strip_size_kb": 0, 00:12:49.924 "state": "online", 00:12:49.924 "raid_level": "raid1", 00:12:49.924 "superblock": true, 00:12:49.924 "num_base_bdevs": 2, 00:12:49.924 "num_base_bdevs_discovered": 2, 00:12:49.924 "num_base_bdevs_operational": 2, 00:12:49.924 "base_bdevs_list": [ 00:12:49.924 { 00:12:49.924 "name": "spare", 00:12:49.924 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:49.924 "is_configured": true, 00:12:49.924 "data_offset": 2048, 00:12:49.924 "data_size": 63488 00:12:49.924 }, 00:12:49.924 { 00:12:49.924 "name": "BaseBdev2", 00:12:49.924 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:49.924 "is_configured": true, 00:12:49.924 "data_offset": 2048, 00:12:49.924 "data_size": 63488 00:12:49.924 } 00:12:49.924 ] 00:12:49.924 }' 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.185 10:41:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.185 "name": "raid_bdev1", 00:12:50.185 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:50.185 "strip_size_kb": 0, 00:12:50.185 "state": "online", 00:12:50.185 "raid_level": "raid1", 00:12:50.185 "superblock": true, 00:12:50.185 "num_base_bdevs": 2, 00:12:50.185 "num_base_bdevs_discovered": 2, 00:12:50.185 "num_base_bdevs_operational": 2, 00:12:50.185 "base_bdevs_list": [ 00:12:50.185 { 00:12:50.185 "name": "spare", 00:12:50.185 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:50.185 "is_configured": true, 00:12:50.185 "data_offset": 2048, 00:12:50.185 "data_size": 63488 00:12:50.185 }, 00:12:50.185 { 00:12:50.185 "name": "BaseBdev2", 00:12:50.185 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:50.185 "is_configured": true, 00:12:50.185 "data_offset": 2048, 00:12:50.185 "data_size": 63488 00:12:50.185 } 00:12:50.185 ] 00:12:50.185 }' 00:12:50.185 10:41:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.185 10:41:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.755 [2024-11-18 10:41:16.400804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.755 [2024-11-18 10:41:16.400836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.755 00:12:50.755 Latency(us) 00:12:50.755 [2024-11-18T10:41:16.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.755 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:50.755 raid_bdev1 : 7.73 95.49 286.47 0.00 0.00 14765.77 298.70 112183.90 00:12:50.755 [2024-11-18T10:41:16.640Z] =================================================================================================================== 00:12:50.755 [2024-11-18T10:41:16.640Z] Total : 95.49 286.47 0.00 0.00 14765.77 298.70 112183.90 00:12:50.755 [2024-11-18 10:41:16.457655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.755 [2024-11-18 10:41:16.457701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.755 [2024-11-18 10:41:16.457778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.755 [2024-11-18 10:41:16.457789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:50.755 { 00:12:50.755 "results": [ 00:12:50.755 { 00:12:50.755 "job": "raid_bdev1", 00:12:50.755 "core_mask": 
"0x1", 00:12:50.755 "workload": "randrw", 00:12:50.755 "percentage": 50, 00:12:50.755 "status": "finished", 00:12:50.755 "queue_depth": 2, 00:12:50.755 "io_size": 3145728, 00:12:50.755 "runtime": 7.728661, 00:12:50.755 "iops": 95.48872696059512, 00:12:50.755 "mibps": 286.46618088178536, 00:12:50.755 "io_failed": 0, 00:12:50.755 "io_timeout": 0, 00:12:50.755 "avg_latency_us": 14765.769531721518, 00:12:50.755 "min_latency_us": 298.70393013100437, 00:12:50.755 "max_latency_us": 112183.89519650655 00:12:50.755 } 00:12:50.755 ], 00:12:50.755 "core_count": 1 00:12:50.755 } 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.755 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:51.015 /dev/nbd0 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.015 1+0 records 
in 00:12:51.015 1+0 records out 00:12:51.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030398 s, 13.5 MB/s 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.015 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.016 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:51.275 /dev/nbd1 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.275 10:41:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.275 1+0 records in 00:12:51.275 1+0 records out 00:12:51.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267358 s, 15.3 MB/s 00:12:51.275 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.275 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # 
size=4096 00:12:51.275 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.275 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.275 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.275 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.275 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.275 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.535 10:41:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.535 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:51.794 10:41:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.794 [2024-11-18 10:41:17.661618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:51.794 [2024-11-18 10:41:17.661677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.794 [2024-11-18 10:41:17.661708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:51.794 [2024-11-18 10:41:17.661720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.794 [2024-11-18 10:41:17.664078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.794 [2024-11-18 10:41:17.664121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:51.794 [2024-11-18 10:41:17.664264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:51.794 [2024-11-18 10:41:17.664334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.794 [2024-11-18 10:41:17.664499] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.794 spare 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.794 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.054 [2024-11-18 10:41:17.764412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:52.054 [2024-11-18 10:41:17.764467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.054 [2024-11-18 10:41:17.764782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:52.054 [2024-11-18 10:41:17.765003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:52.054 [2024-11-18 10:41:17.765026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:52.054 [2024-11-18 10:41:17.765234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.054 "name": "raid_bdev1", 00:12:52.054 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:52.054 "strip_size_kb": 0, 00:12:52.054 "state": "online", 00:12:52.054 "raid_level": "raid1", 00:12:52.054 "superblock": true, 00:12:52.054 "num_base_bdevs": 2, 00:12:52.054 "num_base_bdevs_discovered": 2, 00:12:52.054 "num_base_bdevs_operational": 2, 00:12:52.054 "base_bdevs_list": [ 00:12:52.054 { 00:12:52.054 "name": "spare", 00:12:52.054 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:52.054 "is_configured": true, 00:12:52.054 "data_offset": 2048, 00:12:52.054 "data_size": 63488 00:12:52.054 }, 00:12:52.054 { 00:12:52.054 "name": "BaseBdev2", 00:12:52.054 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:52.054 "is_configured": true, 00:12:52.054 
"data_offset": 2048, 00:12:52.054 "data_size": 63488 00:12:52.054 } 00:12:52.054 ] 00:12:52.054 }' 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.054 10:41:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.624 "name": "raid_bdev1", 00:12:52.624 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:52.624 "strip_size_kb": 0, 00:12:52.624 "state": "online", 00:12:52.624 "raid_level": "raid1", 00:12:52.624 "superblock": true, 00:12:52.624 "num_base_bdevs": 2, 00:12:52.624 "num_base_bdevs_discovered": 2, 00:12:52.624 "num_base_bdevs_operational": 2, 00:12:52.624 "base_bdevs_list": [ 00:12:52.624 { 00:12:52.624 "name": "spare", 00:12:52.624 "uuid": 
"7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:52.624 "is_configured": true, 00:12:52.624 "data_offset": 2048, 00:12:52.624 "data_size": 63488 00:12:52.624 }, 00:12:52.624 { 00:12:52.624 "name": "BaseBdev2", 00:12:52.624 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:52.624 "is_configured": true, 00:12:52.624 "data_offset": 2048, 00:12:52.624 "data_size": 63488 00:12:52.624 } 00:12:52.624 ] 00:12:52.624 }' 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.624 [2024-11-18 10:41:18.392520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.624 
10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.624 "name": "raid_bdev1", 00:12:52.624 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 
00:12:52.624 "strip_size_kb": 0, 00:12:52.624 "state": "online", 00:12:52.624 "raid_level": "raid1", 00:12:52.624 "superblock": true, 00:12:52.624 "num_base_bdevs": 2, 00:12:52.624 "num_base_bdevs_discovered": 1, 00:12:52.624 "num_base_bdevs_operational": 1, 00:12:52.624 "base_bdevs_list": [ 00:12:52.624 { 00:12:52.624 "name": null, 00:12:52.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.624 "is_configured": false, 00:12:52.624 "data_offset": 0, 00:12:52.624 "data_size": 63488 00:12:52.624 }, 00:12:52.624 { 00:12:52.624 "name": "BaseBdev2", 00:12:52.624 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:52.624 "is_configured": true, 00:12:52.624 "data_offset": 2048, 00:12:52.624 "data_size": 63488 00:12:52.624 } 00:12:52.624 ] 00:12:52.624 }' 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.624 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.194 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.194 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.194 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.194 [2024-11-18 10:41:18.863774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.194 [2024-11-18 10:41:18.863983] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:53.194 [2024-11-18 10:41:18.864006] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:53.194 [2024-11-18 10:41:18.864044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.194 [2024-11-18 10:41:18.882422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:53.194 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.194 10:41:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:53.194 [2024-11-18 10:41:18.884551] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.135 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.135 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.135 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.135 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.135 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.136 "name": "raid_bdev1", 00:12:54.136 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:54.136 "strip_size_kb": 0, 00:12:54.136 "state": "online", 
00:12:54.136 "raid_level": "raid1", 00:12:54.136 "superblock": true, 00:12:54.136 "num_base_bdevs": 2, 00:12:54.136 "num_base_bdevs_discovered": 2, 00:12:54.136 "num_base_bdevs_operational": 2, 00:12:54.136 "process": { 00:12:54.136 "type": "rebuild", 00:12:54.136 "target": "spare", 00:12:54.136 "progress": { 00:12:54.136 "blocks": 20480, 00:12:54.136 "percent": 32 00:12:54.136 } 00:12:54.136 }, 00:12:54.136 "base_bdevs_list": [ 00:12:54.136 { 00:12:54.136 "name": "spare", 00:12:54.136 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:54.136 "is_configured": true, 00:12:54.136 "data_offset": 2048, 00:12:54.136 "data_size": 63488 00:12:54.136 }, 00:12:54.136 { 00:12:54.136 "name": "BaseBdev2", 00:12:54.136 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:54.136 "is_configured": true, 00:12:54.136 "data_offset": 2048, 00:12:54.136 "data_size": 63488 00:12:54.136 } 00:12:54.136 ] 00:12:54.136 }' 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.136 10:41:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.394 [2024-11-18 10:41:20.040461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.394 [2024-11-18 10:41:20.089288] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:54.394 [2024-11-18 
10:41:20.089354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.394 [2024-11-18 10:41:20.089372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.394 [2024-11-18 10:41:20.089384] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.394 "name": "raid_bdev1", 00:12:54.394 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:54.394 "strip_size_kb": 0, 00:12:54.394 "state": "online", 00:12:54.394 "raid_level": "raid1", 00:12:54.394 "superblock": true, 00:12:54.394 "num_base_bdevs": 2, 00:12:54.394 "num_base_bdevs_discovered": 1, 00:12:54.394 "num_base_bdevs_operational": 1, 00:12:54.394 "base_bdevs_list": [ 00:12:54.394 { 00:12:54.394 "name": null, 00:12:54.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.394 "is_configured": false, 00:12:54.394 "data_offset": 0, 00:12:54.394 "data_size": 63488 00:12:54.394 }, 00:12:54.394 { 00:12:54.394 "name": "BaseBdev2", 00:12:54.394 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:54.394 "is_configured": true, 00:12:54.394 "data_offset": 2048, 00:12:54.394 "data_size": 63488 00:12:54.394 } 00:12:54.394 ] 00:12:54.394 }' 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.394 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.962 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.962 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.962 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.962 [2024-11-18 10:41:20.585468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.962 [2024-11-18 10:41:20.585550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.962 [2024-11-18 10:41:20.585582] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:12:54.962 [2024-11-18 10:41:20.585598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.962 [2024-11-18 10:41:20.586143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.962 [2024-11-18 10:41:20.586199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.962 [2024-11-18 10:41:20.586336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:54.962 [2024-11-18 10:41:20.586364] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:54.962 [2024-11-18 10:41:20.586374] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:54.962 [2024-11-18 10:41:20.586410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.962 [2024-11-18 10:41:20.601678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:54.962 spare 00:12:54.962 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.962 10:41:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:54.962 [2024-11-18 10:41:20.603518] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.901 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.901 "name": "raid_bdev1", 00:12:55.901 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:55.901 "strip_size_kb": 0, 00:12:55.901 "state": "online", 00:12:55.901 "raid_level": "raid1", 00:12:55.901 "superblock": true, 00:12:55.901 "num_base_bdevs": 2, 00:12:55.901 "num_base_bdevs_discovered": 2, 00:12:55.902 "num_base_bdevs_operational": 2, 00:12:55.902 "process": { 00:12:55.902 "type": "rebuild", 00:12:55.902 "target": "spare", 00:12:55.902 "progress": { 00:12:55.902 "blocks": 20480, 00:12:55.902 "percent": 32 00:12:55.902 } 00:12:55.902 }, 00:12:55.902 "base_bdevs_list": [ 00:12:55.902 { 00:12:55.902 "name": "spare", 00:12:55.902 "uuid": "7cf13416-30d6-5fa1-92a5-2b5298dc82ea", 00:12:55.902 "is_configured": true, 00:12:55.902 "data_offset": 2048, 00:12:55.902 "data_size": 63488 00:12:55.902 }, 00:12:55.902 { 00:12:55.902 "name": "BaseBdev2", 00:12:55.902 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:55.902 "is_configured": true, 00:12:55.902 "data_offset": 2048, 00:12:55.902 "data_size": 63488 00:12:55.902 } 00:12:55.902 ] 00:12:55.902 }' 00:12:55.902 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.902 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:55.902 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.902 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.902 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:55.902 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.902 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.902 [2024-11-18 10:41:21.759306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.162 [2024-11-18 10:41:21.808168] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.162 [2024-11-18 10:41:21.808252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.162 [2024-11-18 10:41:21.808274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.162 [2024-11-18 10:41:21.808284] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.162 "name": "raid_bdev1", 00:12:56.162 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:56.162 "strip_size_kb": 0, 00:12:56.162 "state": "online", 00:12:56.162 "raid_level": "raid1", 00:12:56.162 "superblock": true, 00:12:56.162 "num_base_bdevs": 2, 00:12:56.162 "num_base_bdevs_discovered": 1, 00:12:56.162 "num_base_bdevs_operational": 1, 00:12:56.162 "base_bdevs_list": [ 00:12:56.162 { 00:12:56.162 "name": null, 00:12:56.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.162 "is_configured": false, 00:12:56.162 "data_offset": 0, 00:12:56.162 "data_size": 63488 00:12:56.162 }, 00:12:56.162 { 00:12:56.162 "name": "BaseBdev2", 00:12:56.162 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:56.162 "is_configured": true, 00:12:56.162 "data_offset": 2048, 00:12:56.162 "data_size": 63488 00:12:56.162 } 00:12:56.162 ] 00:12:56.162 }' 
00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.162 10:41:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.421 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.680 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.680 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.680 "name": "raid_bdev1", 00:12:56.680 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:56.680 "strip_size_kb": 0, 00:12:56.680 "state": "online", 00:12:56.680 "raid_level": "raid1", 00:12:56.680 "superblock": true, 00:12:56.680 "num_base_bdevs": 2, 00:12:56.680 "num_base_bdevs_discovered": 1, 00:12:56.680 "num_base_bdevs_operational": 1, 00:12:56.680 "base_bdevs_list": [ 00:12:56.680 { 00:12:56.680 "name": null, 00:12:56.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.680 "is_configured": false, 00:12:56.680 "data_offset": 0, 
00:12:56.680 "data_size": 63488 00:12:56.680 }, 00:12:56.680 { 00:12:56.680 "name": "BaseBdev2", 00:12:56.680 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:56.680 "is_configured": true, 00:12:56.680 "data_offset": 2048, 00:12:56.680 "data_size": 63488 00:12:56.680 } 00:12:56.680 ] 00:12:56.680 }' 00:12:56.680 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.680 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.680 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.680 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.681 [2024-11-18 10:41:22.454976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:56.681 [2024-11-18 10:41:22.455026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.681 [2024-11-18 10:41:22.455054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:56.681 [2024-11-18 10:41:22.455065] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.681 [2024-11-18 10:41:22.455565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.681 [2024-11-18 10:41:22.455597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.681 [2024-11-18 10:41:22.455694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:56.681 [2024-11-18 10:41:22.455721] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:56.681 [2024-11-18 10:41:22.455735] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:56.681 [2024-11-18 10:41:22.455747] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:56.681 BaseBdev1 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.681 10:41:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.618 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.619 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.619 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.619 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.877 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.877 "name": "raid_bdev1", 00:12:57.877 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:57.877 "strip_size_kb": 0, 00:12:57.877 "state": "online", 00:12:57.877 "raid_level": "raid1", 00:12:57.877 "superblock": true, 00:12:57.877 "num_base_bdevs": 2, 00:12:57.877 "num_base_bdevs_discovered": 1, 00:12:57.877 "num_base_bdevs_operational": 1, 00:12:57.877 "base_bdevs_list": [ 00:12:57.877 { 00:12:57.877 "name": null, 00:12:57.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.877 "is_configured": false, 00:12:57.877 "data_offset": 0, 00:12:57.877 "data_size": 63488 00:12:57.877 }, 00:12:57.877 { 00:12:57.877 "name": "BaseBdev2", 00:12:57.877 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:57.877 "is_configured": true, 00:12:57.877 "data_offset": 2048, 00:12:57.877 "data_size": 63488 00:12:57.877 } 00:12:57.877 ] 00:12:57.877 }' 00:12:57.877 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.877 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.136 "name": "raid_bdev1", 00:12:58.136 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:58.136 "strip_size_kb": 0, 00:12:58.136 "state": "online", 00:12:58.136 "raid_level": "raid1", 00:12:58.136 "superblock": true, 00:12:58.136 "num_base_bdevs": 2, 00:12:58.136 "num_base_bdevs_discovered": 1, 00:12:58.136 "num_base_bdevs_operational": 1, 00:12:58.136 "base_bdevs_list": [ 00:12:58.136 { 00:12:58.136 "name": null, 00:12:58.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.136 "is_configured": false, 00:12:58.136 "data_offset": 0, 00:12:58.136 "data_size": 63488 00:12:58.136 }, 00:12:58.136 { 00:12:58.136 "name": "BaseBdev2", 00:12:58.136 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:58.136 "is_configured": true, 
00:12:58.136 "data_offset": 2048, 00:12:58.136 "data_size": 63488 00:12:58.136 } 00:12:58.136 ] 00:12:58.136 }' 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.136 10:41:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.396 [2024-11-18 10:41:24.036459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.396 [2024-11-18 10:41:24.036606] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:58.396 [2024-11-18 10:41:24.036627] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:58.396 request: 00:12:58.396 { 00:12:58.396 "base_bdev": "BaseBdev1", 00:12:58.396 "raid_bdev": "raid_bdev1", 00:12:58.396 "method": "bdev_raid_add_base_bdev", 00:12:58.396 "req_id": 1 00:12:58.396 } 00:12:58.396 Got JSON-RPC error response 00:12:58.396 response: 00:12:58.396 { 00:12:58.396 "code": -22, 00:12:58.396 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:58.396 } 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.396 10:41:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.336 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.336 "name": "raid_bdev1", 00:12:59.336 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:59.336 "strip_size_kb": 0, 00:12:59.336 "state": "online", 00:12:59.336 "raid_level": "raid1", 00:12:59.336 "superblock": true, 00:12:59.336 "num_base_bdevs": 2, 00:12:59.336 "num_base_bdevs_discovered": 1, 00:12:59.336 "num_base_bdevs_operational": 1, 00:12:59.336 "base_bdevs_list": [ 00:12:59.336 { 00:12:59.336 "name": null, 00:12:59.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.336 "is_configured": false, 00:12:59.336 "data_offset": 0, 00:12:59.336 "data_size": 63488 00:12:59.337 }, 00:12:59.337 { 00:12:59.337 "name": "BaseBdev2", 00:12:59.337 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:59.337 "is_configured": true, 00:12:59.337 "data_offset": 2048, 00:12:59.337 "data_size": 63488 00:12:59.337 } 00:12:59.337 ] 00:12:59.337 }' 
00:12:59.337 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.337 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.905 "name": "raid_bdev1", 00:12:59.905 "uuid": "c9a713d4-ca05-425f-9ebb-150b0cdda6e3", 00:12:59.905 "strip_size_kb": 0, 00:12:59.905 "state": "online", 00:12:59.905 "raid_level": "raid1", 00:12:59.905 "superblock": true, 00:12:59.905 "num_base_bdevs": 2, 00:12:59.905 "num_base_bdevs_discovered": 1, 00:12:59.905 "num_base_bdevs_operational": 1, 00:12:59.905 "base_bdevs_list": [ 00:12:59.905 { 00:12:59.905 "name": null, 00:12:59.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.905 "is_configured": false, 00:12:59.905 "data_offset": 0, 
00:12:59.905 "data_size": 63488 00:12:59.905 }, 00:12:59.905 { 00:12:59.905 "name": "BaseBdev2", 00:12:59.905 "uuid": "41c7d90d-1be5-5cbf-915f-dec77d8efd1d", 00:12:59.905 "is_configured": true, 00:12:59.905 "data_offset": 2048, 00:12:59.905 "data_size": 63488 00:12:59.905 } 00:12:59.905 ] 00:12:59.905 }' 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76678 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76678 ']' 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76678 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76678 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76678' 00:12:59.905 killing process with pid 76678 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76678 00:12:59.905 Received shutdown signal, test time was 
about 16.949784 seconds 00:12:59.905 00:12:59.905 Latency(us) 00:12:59.905 [2024-11-18T10:41:25.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.905 [2024-11-18T10:41:25.790Z] =================================================================================================================== 00:12:59.905 [2024-11-18T10:41:25.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:59.905 [2024-11-18 10:41:25.640015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.905 [2024-11-18 10:41:25.640151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.905 [2024-11-18 10:41:25.640219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.905 [2024-11-18 10:41:25.640232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:59.905 10:41:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76678 00:13:00.165 [2024-11-18 10:41:25.853885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.108 ************************************ 00:13:01.108 END TEST raid_rebuild_test_sb_io 00:13:01.108 ************************************ 00:13:01.108 10:41:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:01.108 00:13:01.108 real 0m19.898s 00:13:01.108 user 0m26.105s 00:13:01.108 sys 0m2.155s 00:13:01.108 10:41:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.108 10:41:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.108 10:41:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:01.108 10:41:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:01.108 10:41:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:01.108 
10:41:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.108 10:41:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.108 ************************************ 00:13:01.108 START TEST raid_rebuild_test 00:13:01.108 ************************************ 00:13:01.108 10:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.368 10:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.368 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:01.368 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77361 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77361 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77361 ']' 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.369 10:41:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.369 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.369 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:01.369 Zero copy mechanism will not be used. 00:13:01.369 [2024-11-18 10:41:27.099089] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:01.369 [2024-11-18 10:41:27.099222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77361 ] 00:13:01.628 [2024-11-18 10:41:27.277785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.628 [2024-11-18 10:41:27.382966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.888 [2024-11-18 10:41:27.558442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.888 [2024-11-18 10:41:27.558484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.148 BaseBdev1_malloc 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.148 [2024-11-18 10:41:27.995271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:02.148 [2024-11-18 10:41:27.995354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.148 [2024-11-18 10:41:27.995380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:02.148 [2024-11-18 10:41:27.995391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.148 [2024-11-18 10:41:27.997410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.148 [2024-11-18 10:41:27.997447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.148 BaseBdev1 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:02.148 10:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.148 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:02.408 BaseBdev2_malloc 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.408 [2024-11-18 10:41:28.049748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:02.408 [2024-11-18 10:41:28.049820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.408 [2024-11-18 10:41:28.049838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:02.408 [2024-11-18 10:41:28.049848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.408 [2024-11-18 10:41:28.051800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.408 [2024-11-18 10:41:28.051839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.408 BaseBdev2 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.408 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 BaseBdev3_malloc 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 [2024-11-18 10:41:28.140022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:02.409 [2024-11-18 10:41:28.140071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.409 [2024-11-18 10:41:28.140093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:02.409 [2024-11-18 10:41:28.140103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.409 [2024-11-18 10:41:28.142058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.409 [2024-11-18 10:41:28.142114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:02.409 BaseBdev3 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 BaseBdev4_malloc 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.409 [2024-11-18 10:41:28.193292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:02.409 [2024-11-18 10:41:28.193356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.409 [2024-11-18 10:41:28.193373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:02.409 [2024-11-18 10:41:28.193383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.409 [2024-11-18 10:41:28.195299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.409 [2024-11-18 10:41:28.195338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:02.409 BaseBdev4 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 spare_malloc 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 spare_delay 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:02.409 
10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 [2024-11-18 10:41:28.254988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:02.409 [2024-11-18 10:41:28.255043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.409 [2024-11-18 10:41:28.255061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:02.409 [2024-11-18 10:41:28.255071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.409 [2024-11-18 10:41:28.257043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.409 [2024-11-18 10:41:28.257080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:02.409 spare 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.409 [2024-11-18 10:41:28.267007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.409 [2024-11-18 10:41:28.268715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.409 [2024-11-18 10:41:28.268784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.409 [2024-11-18 10:41:28.268834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:02.409 [2024-11-18 10:41:28.268908] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:13:02.409 [2024-11-18 10:41:28.268920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:02.409 [2024-11-18 10:41:28.269150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:02.409 [2024-11-18 10:41:28.269339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:02.409 [2024-11-18 10:41:28.269358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:02.409 [2024-11-18 10:41:28.269497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.409 10:41:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.409 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.670 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.670 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.670 "name": "raid_bdev1", 00:13:02.670 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:02.670 "strip_size_kb": 0, 00:13:02.670 "state": "online", 00:13:02.670 "raid_level": "raid1", 00:13:02.670 "superblock": false, 00:13:02.670 "num_base_bdevs": 4, 00:13:02.670 "num_base_bdevs_discovered": 4, 00:13:02.670 "num_base_bdevs_operational": 4, 00:13:02.670 "base_bdevs_list": [ 00:13:02.670 { 00:13:02.670 "name": "BaseBdev1", 00:13:02.670 "uuid": "836554f2-dc13-5fe9-8d83-aff48e51055e", 00:13:02.670 "is_configured": true, 00:13:02.670 "data_offset": 0, 00:13:02.670 "data_size": 65536 00:13:02.670 }, 00:13:02.670 { 00:13:02.670 "name": "BaseBdev2", 00:13:02.670 "uuid": "11d7caf4-8fe8-57ae-9ff5-ecf413bd91db", 00:13:02.670 "is_configured": true, 00:13:02.670 "data_offset": 0, 00:13:02.670 "data_size": 65536 00:13:02.670 }, 00:13:02.670 { 00:13:02.670 "name": "BaseBdev3", 00:13:02.670 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:02.670 "is_configured": true, 00:13:02.670 "data_offset": 0, 00:13:02.670 "data_size": 65536 00:13:02.670 }, 00:13:02.670 { 00:13:02.670 "name": "BaseBdev4", 00:13:02.670 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:02.670 "is_configured": true, 00:13:02.670 "data_offset": 0, 00:13:02.670 "data_size": 65536 00:13:02.670 } 00:13:02.670 ] 00:13:02.670 }' 00:13:02.670 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.670 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.930 [2024-11-18 10:41:28.722519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.930 10:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:03.190 [2024-11-18 10:41:28.985775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:03.190 /dev/nbd0 00:13:03.190 10:41:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:03.190 10:41:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:03.190 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:03.190 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:03.190 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.190 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.191 10:41:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.191 1+0 records in 00:13:03.191 1+0 records out 00:13:03.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276022 s, 14.8 MB/s 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:03.191 10:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:09.771 65536+0 records in 00:13:09.771 65536+0 records out 00:13:09.771 33554432 bytes (34 MB, 32 MiB) copied, 5.39673 s, 6.2 MB/s 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.771 
10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.771 [2024-11-18 10:41:34.654224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.771 [2024-11-18 10:41:34.670288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.771 "name": "raid_bdev1", 00:13:09.771 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:09.771 "strip_size_kb": 0, 00:13:09.771 "state": "online", 00:13:09.771 "raid_level": "raid1", 00:13:09.771 "superblock": false, 00:13:09.771 "num_base_bdevs": 4, 00:13:09.771 "num_base_bdevs_discovered": 3, 00:13:09.771 "num_base_bdevs_operational": 3, 00:13:09.771 "base_bdevs_list": [ 00:13:09.771 { 00:13:09.771 "name": null, 00:13:09.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.771 
"is_configured": false, 00:13:09.771 "data_offset": 0, 00:13:09.771 "data_size": 65536 00:13:09.771 }, 00:13:09.771 { 00:13:09.771 "name": "BaseBdev2", 00:13:09.771 "uuid": "11d7caf4-8fe8-57ae-9ff5-ecf413bd91db", 00:13:09.771 "is_configured": true, 00:13:09.771 "data_offset": 0, 00:13:09.771 "data_size": 65536 00:13:09.771 }, 00:13:09.771 { 00:13:09.771 "name": "BaseBdev3", 00:13:09.771 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:09.771 "is_configured": true, 00:13:09.771 "data_offset": 0, 00:13:09.771 "data_size": 65536 00:13:09.771 }, 00:13:09.771 { 00:13:09.771 "name": "BaseBdev4", 00:13:09.771 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:09.771 "is_configured": true, 00:13:09.771 "data_offset": 0, 00:13:09.771 "data_size": 65536 00:13:09.771 } 00:13:09.771 ] 00:13:09.771 }' 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.771 10:41:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.771 10:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.771 10:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.771 10:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.771 [2024-11-18 10:41:35.077568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.771 [2024-11-18 10:41:35.092314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:09.771 10:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.771 10:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:09.771 [2024-11-18 10:41:35.094074] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.342 "name": "raid_bdev1", 00:13:10.342 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:10.342 "strip_size_kb": 0, 00:13:10.342 "state": "online", 00:13:10.342 "raid_level": "raid1", 00:13:10.342 "superblock": false, 00:13:10.342 "num_base_bdevs": 4, 00:13:10.342 "num_base_bdevs_discovered": 4, 00:13:10.342 "num_base_bdevs_operational": 4, 00:13:10.342 "process": { 00:13:10.342 "type": "rebuild", 00:13:10.342 "target": "spare", 00:13:10.342 "progress": { 00:13:10.342 "blocks": 20480, 00:13:10.342 "percent": 31 00:13:10.342 } 00:13:10.342 }, 00:13:10.342 "base_bdevs_list": [ 00:13:10.342 { 00:13:10.342 "name": "spare", 00:13:10.342 "uuid": "21d8e394-8eaa-50ec-8bca-9596ef254ae2", 00:13:10.342 "is_configured": true, 00:13:10.342 "data_offset": 0, 00:13:10.342 "data_size": 65536 00:13:10.342 }, 00:13:10.342 { 00:13:10.342 "name": "BaseBdev2", 00:13:10.342 "uuid": 
"11d7caf4-8fe8-57ae-9ff5-ecf413bd91db", 00:13:10.342 "is_configured": true, 00:13:10.342 "data_offset": 0, 00:13:10.342 "data_size": 65536 00:13:10.342 }, 00:13:10.342 { 00:13:10.342 "name": "BaseBdev3", 00:13:10.342 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:10.342 "is_configured": true, 00:13:10.342 "data_offset": 0, 00:13:10.342 "data_size": 65536 00:13:10.342 }, 00:13:10.342 { 00:13:10.342 "name": "BaseBdev4", 00:13:10.342 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:10.342 "is_configured": true, 00:13:10.342 "data_offset": 0, 00:13:10.342 "data_size": 65536 00:13:10.342 } 00:13:10.342 ] 00:13:10.342 }' 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.342 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.602 [2024-11-18 10:41:36.261570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.602 [2024-11-18 10:41:36.298501] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.602 [2024-11-18 10:41:36.298556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.602 [2024-11-18 10:41:36.298571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.602 [2024-11-18 10:41:36.298580] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.602 "name": "raid_bdev1", 00:13:10.602 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:10.602 "strip_size_kb": 0, 00:13:10.602 "state": "online", 
00:13:10.602 "raid_level": "raid1", 00:13:10.602 "superblock": false, 00:13:10.602 "num_base_bdevs": 4, 00:13:10.602 "num_base_bdevs_discovered": 3, 00:13:10.602 "num_base_bdevs_operational": 3, 00:13:10.602 "base_bdevs_list": [ 00:13:10.602 { 00:13:10.602 "name": null, 00:13:10.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.602 "is_configured": false, 00:13:10.602 "data_offset": 0, 00:13:10.602 "data_size": 65536 00:13:10.602 }, 00:13:10.602 { 00:13:10.602 "name": "BaseBdev2", 00:13:10.602 "uuid": "11d7caf4-8fe8-57ae-9ff5-ecf413bd91db", 00:13:10.602 "is_configured": true, 00:13:10.602 "data_offset": 0, 00:13:10.602 "data_size": 65536 00:13:10.602 }, 00:13:10.602 { 00:13:10.602 "name": "BaseBdev3", 00:13:10.602 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:10.602 "is_configured": true, 00:13:10.602 "data_offset": 0, 00:13:10.602 "data_size": 65536 00:13:10.602 }, 00:13:10.602 { 00:13:10.602 "name": "BaseBdev4", 00:13:10.602 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:10.602 "is_configured": true, 00:13:10.602 "data_offset": 0, 00:13:10.602 "data_size": 65536 00:13:10.602 } 00:13:10.602 ] 00:13:10.602 }' 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.602 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.173 "name": "raid_bdev1", 00:13:11.173 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:11.173 "strip_size_kb": 0, 00:13:11.173 "state": "online", 00:13:11.173 "raid_level": "raid1", 00:13:11.173 "superblock": false, 00:13:11.173 "num_base_bdevs": 4, 00:13:11.173 "num_base_bdevs_discovered": 3, 00:13:11.173 "num_base_bdevs_operational": 3, 00:13:11.173 "base_bdevs_list": [ 00:13:11.173 { 00:13:11.173 "name": null, 00:13:11.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.173 "is_configured": false, 00:13:11.173 "data_offset": 0, 00:13:11.173 "data_size": 65536 00:13:11.173 }, 00:13:11.173 { 00:13:11.173 "name": "BaseBdev2", 00:13:11.173 "uuid": "11d7caf4-8fe8-57ae-9ff5-ecf413bd91db", 00:13:11.173 "is_configured": true, 00:13:11.173 "data_offset": 0, 00:13:11.173 "data_size": 65536 00:13:11.173 }, 00:13:11.173 { 00:13:11.173 "name": "BaseBdev3", 00:13:11.173 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:11.173 "is_configured": true, 00:13:11.173 "data_offset": 0, 00:13:11.173 "data_size": 65536 00:13:11.173 }, 00:13:11.173 { 00:13:11.173 "name": "BaseBdev4", 00:13:11.173 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:11.173 "is_configured": true, 00:13:11.173 "data_offset": 0, 00:13:11.173 "data_size": 65536 00:13:11.173 } 00:13:11.173 ] 00:13:11.173 }' 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.173 [2024-11-18 10:41:36.913199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.173 [2024-11-18 10:41:36.926704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.173 10:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:11.173 [2024-11-18 10:41:36.928468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.113 10:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.113 "name": "raid_bdev1", 00:13:12.113 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:12.113 "strip_size_kb": 0, 00:13:12.113 "state": "online", 00:13:12.113 "raid_level": "raid1", 00:13:12.113 "superblock": false, 00:13:12.113 "num_base_bdevs": 4, 00:13:12.113 "num_base_bdevs_discovered": 4, 00:13:12.113 "num_base_bdevs_operational": 4, 00:13:12.113 "process": { 00:13:12.113 "type": "rebuild", 00:13:12.113 "target": "spare", 00:13:12.113 "progress": { 00:13:12.113 "blocks": 20480, 00:13:12.113 "percent": 31 00:13:12.113 } 00:13:12.114 }, 00:13:12.114 "base_bdevs_list": [ 00:13:12.114 { 00:13:12.114 "name": "spare", 00:13:12.114 "uuid": "21d8e394-8eaa-50ec-8bca-9596ef254ae2", 00:13:12.114 "is_configured": true, 00:13:12.114 "data_offset": 0, 00:13:12.114 "data_size": 65536 00:13:12.114 }, 00:13:12.114 { 00:13:12.114 "name": "BaseBdev2", 00:13:12.114 "uuid": "11d7caf4-8fe8-57ae-9ff5-ecf413bd91db", 00:13:12.114 "is_configured": true, 00:13:12.114 "data_offset": 0, 00:13:12.114 "data_size": 65536 00:13:12.114 }, 00:13:12.114 { 00:13:12.114 "name": "BaseBdev3", 00:13:12.114 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:12.114 "is_configured": true, 00:13:12.114 "data_offset": 0, 00:13:12.114 "data_size": 65536 00:13:12.114 }, 00:13:12.114 { 00:13:12.114 "name": "BaseBdev4", 00:13:12.114 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:12.114 "is_configured": true, 00:13:12.114 "data_offset": 0, 00:13:12.114 "data_size": 65536 00:13:12.114 } 00:13:12.114 ] 00:13:12.114 }' 00:13:12.114 10:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.374 10:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.374 [2024-11-18 10:41:38.088372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.374 [2024-11-18 10:41:38.132751] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.375 10:41:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.375 "name": "raid_bdev1", 00:13:12.375 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:12.375 "strip_size_kb": 0, 00:13:12.375 "state": "online", 00:13:12.375 "raid_level": "raid1", 00:13:12.375 "superblock": false, 00:13:12.375 "num_base_bdevs": 4, 00:13:12.375 "num_base_bdevs_discovered": 3, 00:13:12.375 "num_base_bdevs_operational": 3, 00:13:12.375 "process": { 00:13:12.375 "type": "rebuild", 00:13:12.375 "target": "spare", 00:13:12.375 "progress": { 00:13:12.375 "blocks": 24576, 00:13:12.375 "percent": 37 00:13:12.375 } 00:13:12.375 }, 00:13:12.375 "base_bdevs_list": [ 00:13:12.375 { 00:13:12.375 "name": "spare", 00:13:12.375 "uuid": "21d8e394-8eaa-50ec-8bca-9596ef254ae2", 00:13:12.375 "is_configured": true, 00:13:12.375 "data_offset": 0, 00:13:12.375 "data_size": 65536 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "name": null, 00:13:12.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.375 "is_configured": false, 00:13:12.375 "data_offset": 0, 00:13:12.375 "data_size": 65536 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "name": "BaseBdev3", 00:13:12.375 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:12.375 "is_configured": true, 
00:13:12.375 "data_offset": 0, 00:13:12.375 "data_size": 65536 00:13:12.375 }, 00:13:12.375 { 00:13:12.375 "name": "BaseBdev4", 00:13:12.375 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:12.375 "is_configured": true, 00:13:12.375 "data_offset": 0, 00:13:12.375 "data_size": 65536 00:13:12.375 } 00:13:12.375 ] 00:13:12.375 }' 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.375 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.648 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.648 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=440 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.649 10:41:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.649 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.649 "name": "raid_bdev1", 00:13:12.649 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:12.649 "strip_size_kb": 0, 00:13:12.649 "state": "online", 00:13:12.649 "raid_level": "raid1", 00:13:12.649 "superblock": false, 00:13:12.649 "num_base_bdevs": 4, 00:13:12.649 "num_base_bdevs_discovered": 3, 00:13:12.649 "num_base_bdevs_operational": 3, 00:13:12.649 "process": { 00:13:12.649 "type": "rebuild", 00:13:12.649 "target": "spare", 00:13:12.649 "progress": { 00:13:12.649 "blocks": 26624, 00:13:12.649 "percent": 40 00:13:12.649 } 00:13:12.649 }, 00:13:12.649 "base_bdevs_list": [ 00:13:12.649 { 00:13:12.649 "name": "spare", 00:13:12.649 "uuid": "21d8e394-8eaa-50ec-8bca-9596ef254ae2", 00:13:12.649 "is_configured": true, 00:13:12.649 "data_offset": 0, 00:13:12.649 "data_size": 65536 00:13:12.649 }, 00:13:12.649 { 00:13:12.649 "name": null, 00:13:12.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.649 "is_configured": false, 00:13:12.649 "data_offset": 0, 00:13:12.649 "data_size": 65536 00:13:12.649 }, 00:13:12.649 { 00:13:12.649 "name": "BaseBdev3", 00:13:12.649 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:12.649 "is_configured": true, 00:13:12.649 "data_offset": 0, 00:13:12.649 "data_size": 65536 00:13:12.649 }, 00:13:12.649 { 00:13:12.649 "name": "BaseBdev4", 00:13:12.649 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:12.649 "is_configured": true, 00:13:12.649 "data_offset": 0, 00:13:12.649 "data_size": 65536 00:13:12.650 } 00:13:12.650 ] 00:13:12.650 }' 00:13:12.650 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.650 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.650 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:12.650 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.650 10:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.597 "name": "raid_bdev1", 00:13:13.597 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:13.597 "strip_size_kb": 0, 00:13:13.597 "state": "online", 00:13:13.597 "raid_level": "raid1", 00:13:13.597 "superblock": false, 00:13:13.597 "num_base_bdevs": 4, 00:13:13.597 "num_base_bdevs_discovered": 3, 00:13:13.597 "num_base_bdevs_operational": 3, 00:13:13.597 "process": { 00:13:13.597 "type": "rebuild", 00:13:13.597 "target": "spare", 00:13:13.597 "progress": { 00:13:13.597 
"blocks": 49152, 00:13:13.597 "percent": 75 00:13:13.597 } 00:13:13.597 }, 00:13:13.597 "base_bdevs_list": [ 00:13:13.597 { 00:13:13.597 "name": "spare", 00:13:13.597 "uuid": "21d8e394-8eaa-50ec-8bca-9596ef254ae2", 00:13:13.597 "is_configured": true, 00:13:13.597 "data_offset": 0, 00:13:13.597 "data_size": 65536 00:13:13.597 }, 00:13:13.597 { 00:13:13.597 "name": null, 00:13:13.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.597 "is_configured": false, 00:13:13.597 "data_offset": 0, 00:13:13.597 "data_size": 65536 00:13:13.597 }, 00:13:13.597 { 00:13:13.597 "name": "BaseBdev3", 00:13:13.597 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:13.597 "is_configured": true, 00:13:13.597 "data_offset": 0, 00:13:13.597 "data_size": 65536 00:13:13.597 }, 00:13:13.597 { 00:13:13.597 "name": "BaseBdev4", 00:13:13.597 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:13.597 "is_configured": true, 00:13:13.597 "data_offset": 0, 00:13:13.597 "data_size": 65536 00:13:13.597 } 00:13:13.597 ] 00:13:13.597 }' 00:13:13.597 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.857 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.857 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.857 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.857 10:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.428 [2024-11-18 10:41:40.139938] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:14.428 [2024-11-18 10:41:40.140058] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:14.428 [2024-11-18 10:41:40.140118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.688 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.947 "name": "raid_bdev1", 00:13:14.947 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:14.947 "strip_size_kb": 0, 00:13:14.947 "state": "online", 00:13:14.947 "raid_level": "raid1", 00:13:14.947 "superblock": false, 00:13:14.947 "num_base_bdevs": 4, 00:13:14.947 "num_base_bdevs_discovered": 3, 00:13:14.947 "num_base_bdevs_operational": 3, 00:13:14.947 "base_bdevs_list": [ 00:13:14.947 { 00:13:14.947 "name": "spare", 00:13:14.947 "uuid": "21d8e394-8eaa-50ec-8bca-9596ef254ae2", 00:13:14.947 "is_configured": true, 00:13:14.947 "data_offset": 0, 00:13:14.947 "data_size": 65536 00:13:14.947 }, 00:13:14.947 { 00:13:14.947 "name": null, 00:13:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.947 "is_configured": false, 00:13:14.947 
"data_offset": 0, 00:13:14.947 "data_size": 65536 00:13:14.947 }, 00:13:14.947 { 00:13:14.947 "name": "BaseBdev3", 00:13:14.947 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:14.947 "is_configured": true, 00:13:14.947 "data_offset": 0, 00:13:14.947 "data_size": 65536 00:13:14.947 }, 00:13:14.947 { 00:13:14.947 "name": "BaseBdev4", 00:13:14.947 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:14.947 "is_configured": true, 00:13:14.947 "data_offset": 0, 00:13:14.947 "data_size": 65536 00:13:14.947 } 00:13:14.947 ] 00:13:14.947 }' 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.947 10:41:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.947 "name": "raid_bdev1", 00:13:14.947 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:14.947 "strip_size_kb": 0, 00:13:14.947 "state": "online", 00:13:14.947 "raid_level": "raid1", 00:13:14.947 "superblock": false, 00:13:14.947 "num_base_bdevs": 4, 00:13:14.947 "num_base_bdevs_discovered": 3, 00:13:14.947 "num_base_bdevs_operational": 3, 00:13:14.947 "base_bdevs_list": [ 00:13:14.947 { 00:13:14.947 "name": "spare", 00:13:14.947 "uuid": "21d8e394-8eaa-50ec-8bca-9596ef254ae2", 00:13:14.947 "is_configured": true, 00:13:14.947 "data_offset": 0, 00:13:14.947 "data_size": 65536 00:13:14.947 }, 00:13:14.947 { 00:13:14.947 "name": null, 00:13:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.947 "is_configured": false, 00:13:14.947 "data_offset": 0, 00:13:14.947 "data_size": 65536 00:13:14.947 }, 00:13:14.947 { 00:13:14.947 "name": "BaseBdev3", 00:13:14.947 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:14.947 "is_configured": true, 00:13:14.947 "data_offset": 0, 00:13:14.947 "data_size": 65536 00:13:14.947 }, 00:13:14.947 { 00:13:14.947 "name": "BaseBdev4", 00:13:14.947 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:14.947 "is_configured": true, 00:13:14.947 "data_offset": 0, 00:13:14.947 "data_size": 65536 00:13:14.947 } 00:13:14.947 ] 00:13:14.947 }' 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.947 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.948 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.207 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.207 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.207 "name": "raid_bdev1", 00:13:15.207 "uuid": "f2a51fe7-e4f4-4ace-bec7-276cba3d57e1", 00:13:15.207 "strip_size_kb": 0, 00:13:15.207 "state": "online", 00:13:15.207 "raid_level": "raid1", 00:13:15.207 "superblock": false, 00:13:15.207 "num_base_bdevs": 4, 00:13:15.207 
"num_base_bdevs_discovered": 3, 00:13:15.207 "num_base_bdevs_operational": 3, 00:13:15.207 "base_bdevs_list": [ 00:13:15.207 { 00:13:15.207 "name": "spare", 00:13:15.207 "uuid": "21d8e394-8eaa-50ec-8bca-9596ef254ae2", 00:13:15.207 "is_configured": true, 00:13:15.207 "data_offset": 0, 00:13:15.207 "data_size": 65536 00:13:15.207 }, 00:13:15.207 { 00:13:15.207 "name": null, 00:13:15.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.207 "is_configured": false, 00:13:15.207 "data_offset": 0, 00:13:15.207 "data_size": 65536 00:13:15.207 }, 00:13:15.207 { 00:13:15.207 "name": "BaseBdev3", 00:13:15.207 "uuid": "69101b68-9aab-543a-bfd7-ede6cd5a191b", 00:13:15.207 "is_configured": true, 00:13:15.207 "data_offset": 0, 00:13:15.207 "data_size": 65536 00:13:15.207 }, 00:13:15.207 { 00:13:15.207 "name": "BaseBdev4", 00:13:15.207 "uuid": "d875017b-1917-5bcd-ae6f-5b0d72d699a1", 00:13:15.207 "is_configured": true, 00:13:15.207 "data_offset": 0, 00:13:15.207 "data_size": 65536 00:13:15.207 } 00:13:15.207 ] 00:13:15.207 }' 00:13:15.207 10:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.207 10:41:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.468 [2024-11-18 10:41:41.277972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.468 [2024-11-18 10:41:41.278041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.468 [2024-11-18 10:41:41.278129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.468 [2024-11-18 10:41:41.278229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:13:15.468 [2024-11-18 10:41:41.278294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:15.468 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.469 10:41:41 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.469 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:15.729 /dev/nbd0 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.729 1+0 records in 00:13:15.729 1+0 records out 00:13:15.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327931 s, 12.5 MB/s 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.729 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:15.989 /dev/nbd1 00:13:15.989 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:15.989 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:15.989 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:15.989 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.990 1+0 records in 00:13:15.990 1+0 records out 00:13:15.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530052 s, 7.7 MB/s 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.990 10:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:16.249 10:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:16.249 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.249 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:16.249 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.249 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:16.249 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.249 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.508 10:41:42 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.508 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77361 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77361 ']' 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77361 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77361 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77361' 00:13:16.769 killing process with pid 77361 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77361 00:13:16.769 Received shutdown signal, test time was about 60.000000 seconds 00:13:16.769 00:13:16.769 Latency(us) 00:13:16.769 [2024-11-18T10:41:42.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.769 [2024-11-18T10:41:42.654Z] =================================================================================================================== 00:13:16.769 [2024-11-18T10:41:42.654Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:16.769 [2024-11-18 10:41:42.494476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.769 10:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77361 00:13:17.340 [2024-11-18 10:41:43.004590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.281 10:41:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:18.281 00:13:18.281 real 0m17.162s 00:13:18.281 user 0m18.982s 00:13:18.281 sys 0m3.127s 00:13:18.281 10:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.281 10:41:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.281 ************************************ 00:13:18.281 END TEST raid_rebuild_test 
00:13:18.281 ************************************ 00:13:18.542 10:41:44 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:18.542 10:41:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:18.542 10:41:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.542 10:41:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.542 ************************************ 00:13:18.542 START TEST raid_rebuild_test_sb 00:13:18.542 ************************************ 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77802 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77802 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77802 ']' 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.542 10:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.542 [2024-11-18 10:41:44.346035] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:18.542 [2024-11-18 10:41:44.346233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:18.542 Zero copy mechanism will not be used. 
00:13:18.542 -allocations --file-prefix=spdk_pid77802 ] 00:13:18.803 [2024-11-18 10:41:44.523335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.803 [2024-11-18 10:41:44.655284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.063 [2024-11-18 10:41:44.893437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.063 [2024-11-18 10:41:44.893566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.323 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.323 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.324 BaseBdev1_malloc 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.324 [2024-11-18 10:41:45.183998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:19.324 [2024-11-18 10:41:45.184176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.324 [2024-11-18 10:41:45.184216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:13:19.324 [2024-11-18 10:41:45.184229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.324 [2024-11-18 10:41:45.186573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.324 [2024-11-18 10:41:45.186611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:19.324 BaseBdev1 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.324 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 BaseBdev2_malloc 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 [2024-11-18 10:41:45.245128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:19.585 [2024-11-18 10:41:45.245196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.585 [2024-11-18 10:41:45.245216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:19.585 [2024-11-18 10:41:45.245230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.585 [2024-11-18 10:41:45.247547] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.585 [2024-11-18 10:41:45.247583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:19.585 BaseBdev2 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 BaseBdev3_malloc 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 [2024-11-18 10:41:45.337260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:19.585 [2024-11-18 10:41:45.337313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.585 [2024-11-18 10:41:45.337336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:19.585 [2024-11-18 10:41:45.337348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.585 [2024-11-18 10:41:45.339705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.585 [2024-11-18 10:41:45.339836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 
00:13:19.585 BaseBdev3 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 BaseBdev4_malloc 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 [2024-11-18 10:41:45.397536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:19.585 [2024-11-18 10:41:45.397587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.585 [2024-11-18 10:41:45.397606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:19.585 [2024-11-18 10:41:45.397617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.585 [2024-11-18 10:41:45.399945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.585 [2024-11-18 10:41:45.399985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:19.585 BaseBdev4 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 spare_malloc 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 spare_delay 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.585 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.845 [2024-11-18 10:41:45.471877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.845 [2024-11-18 10:41:45.471936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.845 [2024-11-18 10:41:45.471955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:19.845 [2024-11-18 10:41:45.471966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.845 [2024-11-18 10:41:45.474285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.845 [2024-11-18 10:41:45.474321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.845 spare 
00:13:19.845 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.846 [2024-11-18 10:41:45.483917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.846 [2024-11-18 10:41:45.485930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.846 [2024-11-18 10:41:45.486087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:19.846 [2024-11-18 10:41:45.486159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:19.846 [2024-11-18 10:41:45.486357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:19.846 [2024-11-18 10:41:45.486375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:19.846 [2024-11-18 10:41:45.486614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:19.846 [2024-11-18 10:41:45.486799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:19.846 [2024-11-18 10:41:45.486809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:19.846 [2024-11-18 10:41:45.486981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 4 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.846 "name": "raid_bdev1", 00:13:19.846 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:19.846 "strip_size_kb": 0, 00:13:19.846 "state": "online", 00:13:19.846 "raid_level": "raid1", 00:13:19.846 "superblock": true, 00:13:19.846 "num_base_bdevs": 4, 00:13:19.846 "num_base_bdevs_discovered": 4, 00:13:19.846 "num_base_bdevs_operational": 4, 00:13:19.846 
"base_bdevs_list": [ 00:13:19.846 { 00:13:19.846 "name": "BaseBdev1", 00:13:19.846 "uuid": "28129b9a-b09b-5032-a3cd-6900d4396a2e", 00:13:19.846 "is_configured": true, 00:13:19.846 "data_offset": 2048, 00:13:19.846 "data_size": 63488 00:13:19.846 }, 00:13:19.846 { 00:13:19.846 "name": "BaseBdev2", 00:13:19.846 "uuid": "33640aa2-cb6d-5617-ae00-b7ff76396209", 00:13:19.846 "is_configured": true, 00:13:19.846 "data_offset": 2048, 00:13:19.846 "data_size": 63488 00:13:19.846 }, 00:13:19.846 { 00:13:19.846 "name": "BaseBdev3", 00:13:19.846 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:19.846 "is_configured": true, 00:13:19.846 "data_offset": 2048, 00:13:19.846 "data_size": 63488 00:13:19.846 }, 00:13:19.846 { 00:13:19.846 "name": "BaseBdev4", 00:13:19.846 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:19.846 "is_configured": true, 00:13:19.846 "data_offset": 2048, 00:13:19.846 "data_size": 63488 00:13:19.846 } 00:13:19.846 ] 00:13:19.846 }' 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.846 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.106 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:20.106 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:20.106 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.106 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.106 [2024-11-18 10:41:45.955425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.106 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.366 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:20.366 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:20.366 10:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:20.366 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.366 10:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.366 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
raid_bdev1 /dev/nbd0 00:13:20.366 [2024-11-18 10:41:46.214777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:20.366 /dev/nbd0 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.627 1+0 records in 00:13:20.627 1+0 records out 00:13:20.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361438 s, 11.3 MB/s 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:20.627 10:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:25.905 63488+0 records in 00:13:25.905 63488+0 records out 00:13:25.905 32505856 bytes (33 MB, 31 MiB) copied, 5.24011 s, 6.2 MB/s 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.905 10:41:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.905 [2024-11-18 10:41:51.749758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.905 [2024-11-18 10:41:51.761826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.905 10:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.165 10:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.165 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.165 "name": "raid_bdev1", 00:13:26.165 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:26.165 "strip_size_kb": 0, 00:13:26.165 "state": "online", 00:13:26.165 "raid_level": "raid1", 00:13:26.165 "superblock": true, 00:13:26.165 "num_base_bdevs": 4, 00:13:26.165 "num_base_bdevs_discovered": 3, 00:13:26.165 "num_base_bdevs_operational": 3, 00:13:26.165 "base_bdevs_list": [ 00:13:26.165 { 00:13:26.165 "name": null, 00:13:26.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.165 "is_configured": false, 00:13:26.165 "data_offset": 0, 00:13:26.165 "data_size": 63488 00:13:26.165 }, 00:13:26.165 { 00:13:26.165 "name": "BaseBdev2", 00:13:26.165 "uuid": "33640aa2-cb6d-5617-ae00-b7ff76396209", 00:13:26.165 "is_configured": true, 00:13:26.165 "data_offset": 2048, 00:13:26.165 "data_size": 63488 00:13:26.165 }, 00:13:26.165 { 00:13:26.165 "name": "BaseBdev3", 00:13:26.165 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:26.165 "is_configured": true, 00:13:26.165 "data_offset": 2048, 00:13:26.165 "data_size": 63488 00:13:26.165 }, 00:13:26.165 { 00:13:26.165 "name": "BaseBdev4", 00:13:26.165 
"uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:26.165 "is_configured": true, 00:13:26.165 "data_offset": 2048, 00:13:26.165 "data_size": 63488 00:13:26.165 } 00:13:26.165 ] 00:13:26.165 }' 00:13:26.165 10:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.165 10:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.425 10:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.425 10:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.425 10:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.425 [2024-11-18 10:41:52.228983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.425 [2024-11-18 10:41:52.244750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:26.425 10:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.425 10:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:26.425 [2024-11-18 10:41:52.246537] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.808 
10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.808 "name": "raid_bdev1", 00:13:27.808 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:27.808 "strip_size_kb": 0, 00:13:27.808 "state": "online", 00:13:27.808 "raid_level": "raid1", 00:13:27.808 "superblock": true, 00:13:27.808 "num_base_bdevs": 4, 00:13:27.808 "num_base_bdevs_discovered": 4, 00:13:27.808 "num_base_bdevs_operational": 4, 00:13:27.808 "process": { 00:13:27.808 "type": "rebuild", 00:13:27.808 "target": "spare", 00:13:27.808 "progress": { 00:13:27.808 "blocks": 20480, 00:13:27.808 "percent": 32 00:13:27.808 } 00:13:27.808 }, 00:13:27.808 "base_bdevs_list": [ 00:13:27.808 { 00:13:27.808 "name": "spare", 00:13:27.808 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:27.808 "is_configured": true, 00:13:27.808 "data_offset": 2048, 00:13:27.808 "data_size": 63488 00:13:27.808 }, 00:13:27.808 { 00:13:27.808 "name": "BaseBdev2", 00:13:27.808 "uuid": "33640aa2-cb6d-5617-ae00-b7ff76396209", 00:13:27.808 "is_configured": true, 00:13:27.808 "data_offset": 2048, 00:13:27.808 "data_size": 63488 00:13:27.808 }, 00:13:27.808 { 00:13:27.808 "name": "BaseBdev3", 00:13:27.808 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:27.808 "is_configured": true, 00:13:27.808 "data_offset": 2048, 00:13:27.808 "data_size": 63488 00:13:27.808 }, 00:13:27.808 { 00:13:27.808 "name": "BaseBdev4", 00:13:27.808 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:27.808 "is_configured": true, 00:13:27.808 "data_offset": 2048, 00:13:27.808 
"data_size": 63488 00:13:27.808 } 00:13:27.808 ] 00:13:27.808 }' 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.808 [2024-11-18 10:41:53.413624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.808 [2024-11-18 10:41:53.451105] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:27.808 [2024-11-18 10:41:53.451233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.808 [2024-11-18 10:41:53.451273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.808 [2024-11-18 10:41:53.451287] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:27.808 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.809 "name": "raid_bdev1", 00:13:27.809 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:27.809 "strip_size_kb": 0, 00:13:27.809 "state": "online", 00:13:27.809 "raid_level": "raid1", 00:13:27.809 "superblock": true, 00:13:27.809 "num_base_bdevs": 4, 00:13:27.809 "num_base_bdevs_discovered": 3, 00:13:27.809 "num_base_bdevs_operational": 3, 00:13:27.809 "base_bdevs_list": [ 00:13:27.809 { 00:13:27.809 "name": null, 00:13:27.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.809 "is_configured": false, 00:13:27.809 "data_offset": 0, 00:13:27.809 "data_size": 63488 00:13:27.809 }, 00:13:27.809 { 00:13:27.809 "name": "BaseBdev2", 
00:13:27.809 "uuid": "33640aa2-cb6d-5617-ae00-b7ff76396209", 00:13:27.809 "is_configured": true, 00:13:27.809 "data_offset": 2048, 00:13:27.809 "data_size": 63488 00:13:27.809 }, 00:13:27.809 { 00:13:27.809 "name": "BaseBdev3", 00:13:27.809 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:27.809 "is_configured": true, 00:13:27.809 "data_offset": 2048, 00:13:27.809 "data_size": 63488 00:13:27.809 }, 00:13:27.809 { 00:13:27.809 "name": "BaseBdev4", 00:13:27.809 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:27.809 "is_configured": true, 00:13:27.809 "data_offset": 2048, 00:13:27.809 "data_size": 63488 00:13:27.809 } 00:13:27.809 ] 00:13:27.809 }' 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.809 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.068 10:41:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.068 "name": "raid_bdev1", 00:13:28.068 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:28.068 "strip_size_kb": 0, 00:13:28.068 "state": "online", 00:13:28.068 "raid_level": "raid1", 00:13:28.068 "superblock": true, 00:13:28.068 "num_base_bdevs": 4, 00:13:28.068 "num_base_bdevs_discovered": 3, 00:13:28.068 "num_base_bdevs_operational": 3, 00:13:28.068 "base_bdevs_list": [ 00:13:28.068 { 00:13:28.068 "name": null, 00:13:28.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.068 "is_configured": false, 00:13:28.068 "data_offset": 0, 00:13:28.068 "data_size": 63488 00:13:28.068 }, 00:13:28.068 { 00:13:28.068 "name": "BaseBdev2", 00:13:28.068 "uuid": "33640aa2-cb6d-5617-ae00-b7ff76396209", 00:13:28.068 "is_configured": true, 00:13:28.068 "data_offset": 2048, 00:13:28.068 "data_size": 63488 00:13:28.068 }, 00:13:28.068 { 00:13:28.068 "name": "BaseBdev3", 00:13:28.068 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:28.068 "is_configured": true, 00:13:28.068 "data_offset": 2048, 00:13:28.068 "data_size": 63488 00:13:28.068 }, 00:13:28.068 { 00:13:28.068 "name": "BaseBdev4", 00:13:28.068 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:28.068 "is_configured": true, 00:13:28.068 "data_offset": 2048, 00:13:28.068 "data_size": 63488 00:13:28.068 } 00:13:28.068 ] 00:13:28.068 }' 00:13:28.068 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.327 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.327 10:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.327 10:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.327 10:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.327 10:41:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.327 10:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.327 [2024-11-18 10:41:54.051069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.327 [2024-11-18 10:41:54.065200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:28.327 10:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.327 10:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:28.327 [2024-11-18 10:41:54.067066] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.266 "name": 
"raid_bdev1", 00:13:29.266 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:29.266 "strip_size_kb": 0, 00:13:29.266 "state": "online", 00:13:29.266 "raid_level": "raid1", 00:13:29.266 "superblock": true, 00:13:29.266 "num_base_bdevs": 4, 00:13:29.266 "num_base_bdevs_discovered": 4, 00:13:29.266 "num_base_bdevs_operational": 4, 00:13:29.266 "process": { 00:13:29.266 "type": "rebuild", 00:13:29.266 "target": "spare", 00:13:29.266 "progress": { 00:13:29.266 "blocks": 20480, 00:13:29.266 "percent": 32 00:13:29.266 } 00:13:29.266 }, 00:13:29.266 "base_bdevs_list": [ 00:13:29.266 { 00:13:29.266 "name": "spare", 00:13:29.266 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:29.266 "is_configured": true, 00:13:29.266 "data_offset": 2048, 00:13:29.266 "data_size": 63488 00:13:29.266 }, 00:13:29.266 { 00:13:29.266 "name": "BaseBdev2", 00:13:29.266 "uuid": "33640aa2-cb6d-5617-ae00-b7ff76396209", 00:13:29.266 "is_configured": true, 00:13:29.266 "data_offset": 2048, 00:13:29.266 "data_size": 63488 00:13:29.266 }, 00:13:29.266 { 00:13:29.266 "name": "BaseBdev3", 00:13:29.266 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:29.266 "is_configured": true, 00:13:29.266 "data_offset": 2048, 00:13:29.266 "data_size": 63488 00:13:29.266 }, 00:13:29.266 { 00:13:29.266 "name": "BaseBdev4", 00:13:29.266 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:29.266 "is_configured": true, 00:13:29.266 "data_offset": 2048, 00:13:29.266 "data_size": 63488 00:13:29.266 } 00:13:29.266 ] 00:13:29.266 }' 00:13:29.266 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.526 10:41:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:29.526 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.526 [2024-11-18 10:41:55.230562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.526 [2024-11-18 10:41:55.371397] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.526 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.527 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.527 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.527 10:41:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.527 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.527 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.527 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.527 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.527 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.787 "name": "raid_bdev1", 00:13:29.787 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:29.787 "strip_size_kb": 0, 00:13:29.787 "state": "online", 00:13:29.787 "raid_level": "raid1", 00:13:29.787 "superblock": true, 00:13:29.787 "num_base_bdevs": 4, 00:13:29.787 "num_base_bdevs_discovered": 3, 00:13:29.787 "num_base_bdevs_operational": 3, 00:13:29.787 "process": { 00:13:29.787 "type": "rebuild", 00:13:29.787 "target": "spare", 00:13:29.787 "progress": { 00:13:29.787 "blocks": 24576, 00:13:29.787 "percent": 38 00:13:29.787 } 00:13:29.787 }, 00:13:29.787 "base_bdevs_list": [ 00:13:29.787 { 00:13:29.787 "name": "spare", 00:13:29.787 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:29.787 "is_configured": true, 00:13:29.787 "data_offset": 2048, 00:13:29.787 "data_size": 63488 00:13:29.787 }, 00:13:29.787 { 00:13:29.787 "name": null, 00:13:29.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.787 "is_configured": false, 00:13:29.787 "data_offset": 0, 00:13:29.787 "data_size": 63488 00:13:29.787 }, 00:13:29.787 { 00:13:29.787 "name": "BaseBdev3", 00:13:29.787 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:29.787 "is_configured": true, 00:13:29.787 "data_offset": 2048, 00:13:29.787 "data_size": 63488 00:13:29.787 }, 
00:13:29.787 { 00:13:29.787 "name": "BaseBdev4", 00:13:29.787 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:29.787 "is_configured": true, 00:13:29.787 "data_offset": 2048, 00:13:29.787 "data_size": 63488 00:13:29.787 } 00:13:29.787 ] 00:13:29.787 }' 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=457 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.787 "name": "raid_bdev1", 00:13:29.787 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:29.787 "strip_size_kb": 0, 00:13:29.787 "state": "online", 00:13:29.787 "raid_level": "raid1", 00:13:29.787 "superblock": true, 00:13:29.787 "num_base_bdevs": 4, 00:13:29.787 "num_base_bdevs_discovered": 3, 00:13:29.787 "num_base_bdevs_operational": 3, 00:13:29.787 "process": { 00:13:29.787 "type": "rebuild", 00:13:29.787 "target": "spare", 00:13:29.787 "progress": { 00:13:29.787 "blocks": 26624, 00:13:29.787 "percent": 41 00:13:29.787 } 00:13:29.787 }, 00:13:29.787 "base_bdevs_list": [ 00:13:29.787 { 00:13:29.787 "name": "spare", 00:13:29.787 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:29.787 "is_configured": true, 00:13:29.787 "data_offset": 2048, 00:13:29.787 "data_size": 63488 00:13:29.787 }, 00:13:29.787 { 00:13:29.787 "name": null, 00:13:29.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.787 "is_configured": false, 00:13:29.787 "data_offset": 0, 00:13:29.787 "data_size": 63488 00:13:29.787 }, 00:13:29.787 { 00:13:29.787 "name": "BaseBdev3", 00:13:29.787 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:29.787 "is_configured": true, 00:13:29.787 "data_offset": 2048, 00:13:29.787 "data_size": 63488 00:13:29.787 }, 00:13:29.787 { 00:13:29.787 "name": "BaseBdev4", 00:13:29.787 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:29.787 "is_configured": true, 00:13:29.787 "data_offset": 2048, 00:13:29.787 "data_size": 63488 00:13:29.787 } 00:13:29.787 ] 00:13:29.787 }' 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.787 10:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.168 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.168 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.168 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.168 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.168 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.168 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.168 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.169 "name": "raid_bdev1", 00:13:31.169 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:31.169 "strip_size_kb": 0, 00:13:31.169 "state": "online", 00:13:31.169 "raid_level": "raid1", 00:13:31.169 "superblock": true, 00:13:31.169 "num_base_bdevs": 4, 00:13:31.169 "num_base_bdevs_discovered": 3, 00:13:31.169 "num_base_bdevs_operational": 3, 00:13:31.169 "process": { 00:13:31.169 "type": "rebuild", 00:13:31.169 "target": "spare", 
00:13:31.169 "progress": { 00:13:31.169 "blocks": 49152, 00:13:31.169 "percent": 77 00:13:31.169 } 00:13:31.169 }, 00:13:31.169 "base_bdevs_list": [ 00:13:31.169 { 00:13:31.169 "name": "spare", 00:13:31.169 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:31.169 "is_configured": true, 00:13:31.169 "data_offset": 2048, 00:13:31.169 "data_size": 63488 00:13:31.169 }, 00:13:31.169 { 00:13:31.169 "name": null, 00:13:31.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.169 "is_configured": false, 00:13:31.169 "data_offset": 0, 00:13:31.169 "data_size": 63488 00:13:31.169 }, 00:13:31.169 { 00:13:31.169 "name": "BaseBdev3", 00:13:31.169 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:31.169 "is_configured": true, 00:13:31.169 "data_offset": 2048, 00:13:31.169 "data_size": 63488 00:13:31.169 }, 00:13:31.169 { 00:13:31.169 "name": "BaseBdev4", 00:13:31.169 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:31.169 "is_configured": true, 00:13:31.169 "data_offset": 2048, 00:13:31.169 "data_size": 63488 00:13:31.169 } 00:13:31.169 ] 00:13:31.169 }' 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.169 10:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.428 [2024-11-18 10:41:57.278434] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:31.428 [2024-11-18 10:41:57.278539] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:31.428 [2024-11-18 10:41:57.278649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.997 "name": "raid_bdev1", 00:13:31.997 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:31.997 "strip_size_kb": 0, 00:13:31.997 "state": "online", 00:13:31.997 "raid_level": "raid1", 00:13:31.997 "superblock": true, 00:13:31.997 "num_base_bdevs": 4, 00:13:31.997 "num_base_bdevs_discovered": 3, 00:13:31.997 "num_base_bdevs_operational": 3, 00:13:31.997 "base_bdevs_list": [ 00:13:31.997 { 00:13:31.997 "name": "spare", 00:13:31.997 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:31.997 "is_configured": true, 00:13:31.997 "data_offset": 2048, 00:13:31.997 "data_size": 63488 00:13:31.997 }, 00:13:31.997 { 00:13:31.997 "name": null, 
00:13:31.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.997 "is_configured": false, 00:13:31.997 "data_offset": 0, 00:13:31.997 "data_size": 63488 00:13:31.997 }, 00:13:31.997 { 00:13:31.997 "name": "BaseBdev3", 00:13:31.997 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:31.997 "is_configured": true, 00:13:31.997 "data_offset": 2048, 00:13:31.997 "data_size": 63488 00:13:31.997 }, 00:13:31.997 { 00:13:31.997 "name": "BaseBdev4", 00:13:31.997 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:31.997 "is_configured": true, 00:13:31.997 "data_offset": 2048, 00:13:31.997 "data_size": 63488 00:13:31.997 } 00:13:31.997 ] 00:13:31.997 }' 00:13:31.997 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.257 
10:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.257 10:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.257 "name": "raid_bdev1", 00:13:32.257 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:32.257 "strip_size_kb": 0, 00:13:32.257 "state": "online", 00:13:32.257 "raid_level": "raid1", 00:13:32.257 "superblock": true, 00:13:32.257 "num_base_bdevs": 4, 00:13:32.257 "num_base_bdevs_discovered": 3, 00:13:32.257 "num_base_bdevs_operational": 3, 00:13:32.257 "base_bdevs_list": [ 00:13:32.257 { 00:13:32.257 "name": "spare", 00:13:32.257 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": null, 00:13:32.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.257 "is_configured": false, 00:13:32.257 "data_offset": 0, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": "BaseBdev3", 00:13:32.257 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": "BaseBdev4", 00:13:32.257 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 } 00:13:32.257 ] 00:13:32.257 }' 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.257 10:41:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.257 "name": "raid_bdev1", 
00:13:32.257 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:32.257 "strip_size_kb": 0, 00:13:32.257 "state": "online", 00:13:32.257 "raid_level": "raid1", 00:13:32.257 "superblock": true, 00:13:32.257 "num_base_bdevs": 4, 00:13:32.257 "num_base_bdevs_discovered": 3, 00:13:32.257 "num_base_bdevs_operational": 3, 00:13:32.257 "base_bdevs_list": [ 00:13:32.257 { 00:13:32.257 "name": "spare", 00:13:32.257 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": null, 00:13:32.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.257 "is_configured": false, 00:13:32.257 "data_offset": 0, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": "BaseBdev3", 00:13:32.257 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": "BaseBdev4", 00:13:32.257 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 } 00:13:32.257 ] 00:13:32.257 }' 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.257 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.826 [2024-11-18 10:41:58.512637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.826 [2024-11-18 10:41:58.512709] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:13:32.826 [2024-11-18 10:41:58.512784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.826 [2024-11-18 10:41:58.512857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.826 [2024-11-18 10:41:58.512866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:32.826 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:33.086 /dev/nbd0 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.086 1+0 records in 00:13:33.086 1+0 records out 00:13:33.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219927 s, 18.6 MB/s 00:13:33.086 10:41:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.086 10:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:33.347 /dev/nbd1 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.347 1+0 records in 00:13:33.347 1+0 records out 00:13:33.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375065 s, 10.9 MB/s 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:33.347 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.347 10:41:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.605 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.605 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.605 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.605 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.606 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.606 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.606 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:33.606 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.606 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.606 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.865 [2024-11-18 10:41:59.670177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.865 [2024-11-18 10:41:59.670287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.865 [2024-11-18 10:41:59.670313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:33.865 [2024-11-18 10:41:59.670322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.865 [2024-11-18 10:41:59.672499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.865 [2024-11-18 10:41:59.672537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.865 [2024-11-18 10:41:59.672620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:33.865 [2024-11-18 10:41:59.672672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.865 [2024-11-18 10:41:59.672799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:13:33.865 [2024-11-18 10:41:59.672895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:33.865 spare 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.865 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.125 [2024-11-18 10:41:59.772780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:34.125 [2024-11-18 10:41:59.772852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:34.125 [2024-11-18 10:41:59.773107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:34.125 [2024-11-18 10:41:59.773323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:34.125 [2024-11-18 10:41:59.773338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:34.125 [2024-11-18 10:41:59.773500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.125 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.126 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.126 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.126 "name": "raid_bdev1", 00:13:34.126 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:34.126 "strip_size_kb": 0, 00:13:34.126 "state": "online", 00:13:34.126 "raid_level": "raid1", 00:13:34.126 "superblock": true, 00:13:34.126 "num_base_bdevs": 4, 00:13:34.126 "num_base_bdevs_discovered": 3, 00:13:34.126 "num_base_bdevs_operational": 3, 00:13:34.126 "base_bdevs_list": [ 00:13:34.126 { 00:13:34.126 "name": "spare", 00:13:34.126 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:34.126 "is_configured": true, 00:13:34.126 "data_offset": 2048, 00:13:34.126 "data_size": 63488 00:13:34.126 }, 00:13:34.126 { 00:13:34.126 "name": null, 00:13:34.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.126 "is_configured": false, 00:13:34.126 "data_offset": 2048, 
00:13:34.126 "data_size": 63488 00:13:34.126 }, 00:13:34.126 { 00:13:34.126 "name": "BaseBdev3", 00:13:34.126 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:34.126 "is_configured": true, 00:13:34.126 "data_offset": 2048, 00:13:34.126 "data_size": 63488 00:13:34.126 }, 00:13:34.126 { 00:13:34.126 "name": "BaseBdev4", 00:13:34.126 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:34.126 "is_configured": true, 00:13:34.126 "data_offset": 2048, 00:13:34.126 "data_size": 63488 00:13:34.126 } 00:13:34.126 ] 00:13:34.126 }' 00:13:34.126 10:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.126 10:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.385 "name": "raid_bdev1", 00:13:34.385 "uuid": 
"29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:34.385 "strip_size_kb": 0, 00:13:34.385 "state": "online", 00:13:34.385 "raid_level": "raid1", 00:13:34.385 "superblock": true, 00:13:34.385 "num_base_bdevs": 4, 00:13:34.385 "num_base_bdevs_discovered": 3, 00:13:34.385 "num_base_bdevs_operational": 3, 00:13:34.385 "base_bdevs_list": [ 00:13:34.385 { 00:13:34.385 "name": "spare", 00:13:34.385 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:34.385 "is_configured": true, 00:13:34.385 "data_offset": 2048, 00:13:34.385 "data_size": 63488 00:13:34.385 }, 00:13:34.385 { 00:13:34.385 "name": null, 00:13:34.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.385 "is_configured": false, 00:13:34.385 "data_offset": 2048, 00:13:34.385 "data_size": 63488 00:13:34.385 }, 00:13:34.385 { 00:13:34.385 "name": "BaseBdev3", 00:13:34.385 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:34.385 "is_configured": true, 00:13:34.385 "data_offset": 2048, 00:13:34.385 "data_size": 63488 00:13:34.385 }, 00:13:34.385 { 00:13:34.385 "name": "BaseBdev4", 00:13:34.385 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:34.385 "is_configured": true, 00:13:34.385 "data_offset": 2048, 00:13:34.385 "data_size": 63488 00:13:34.385 } 00:13:34.385 ] 00:13:34.385 }' 00:13:34.385 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.645 [2024-11-18 10:42:00.420933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.645 "name": "raid_bdev1", 00:13:34.645 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:34.645 "strip_size_kb": 0, 00:13:34.645 "state": "online", 00:13:34.645 "raid_level": "raid1", 00:13:34.645 "superblock": true, 00:13:34.645 "num_base_bdevs": 4, 00:13:34.645 "num_base_bdevs_discovered": 2, 00:13:34.645 "num_base_bdevs_operational": 2, 00:13:34.645 "base_bdevs_list": [ 00:13:34.645 { 00:13:34.645 "name": null, 00:13:34.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.645 "is_configured": false, 00:13:34.645 "data_offset": 0, 00:13:34.645 "data_size": 63488 00:13:34.645 }, 00:13:34.645 { 00:13:34.645 "name": null, 00:13:34.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.645 "is_configured": false, 00:13:34.645 "data_offset": 2048, 00:13:34.645 "data_size": 63488 00:13:34.645 }, 00:13:34.645 { 00:13:34.645 "name": "BaseBdev3", 00:13:34.645 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:34.645 "is_configured": true, 00:13:34.645 "data_offset": 2048, 00:13:34.645 "data_size": 63488 00:13:34.645 }, 00:13:34.645 { 00:13:34.645 "name": "BaseBdev4", 00:13:34.645 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:34.645 "is_configured": true, 00:13:34.645 "data_offset": 2048, 00:13:34.645 "data_size": 63488 00:13:34.645 } 00:13:34.645 ] 00:13:34.645 }' 00:13:34.645 10:42:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.645 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.217 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.217 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.217 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.217 [2024-11-18 10:42:00.844211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.217 [2024-11-18 10:42:00.844408] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:35.217 [2024-11-18 10:42:00.844465] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:35.217 [2024-11-18 10:42:00.844528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.217 [2024-11-18 10:42:00.858609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:35.217 10:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.217 10:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:35.217 [2024-11-18 10:42:00.860506] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.185 "name": "raid_bdev1", 00:13:36.185 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:36.185 "strip_size_kb": 0, 00:13:36.185 "state": "online", 00:13:36.185 "raid_level": "raid1", 00:13:36.185 "superblock": true, 00:13:36.185 "num_base_bdevs": 4, 00:13:36.185 "num_base_bdevs_discovered": 3, 00:13:36.185 "num_base_bdevs_operational": 3, 00:13:36.185 "process": { 00:13:36.185 "type": "rebuild", 00:13:36.185 "target": "spare", 00:13:36.185 "progress": { 00:13:36.185 "blocks": 20480, 00:13:36.185 "percent": 32 00:13:36.185 } 00:13:36.185 }, 00:13:36.185 "base_bdevs_list": [ 00:13:36.185 { 00:13:36.185 "name": "spare", 00:13:36.185 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:36.185 "is_configured": true, 00:13:36.185 "data_offset": 2048, 00:13:36.185 "data_size": 63488 00:13:36.185 }, 00:13:36.185 { 00:13:36.185 "name": null, 00:13:36.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.185 "is_configured": false, 00:13:36.185 "data_offset": 2048, 00:13:36.185 "data_size": 63488 00:13:36.185 }, 00:13:36.185 { 00:13:36.185 "name": "BaseBdev3", 00:13:36.185 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:36.185 "is_configured": true, 00:13:36.185 "data_offset": 2048, 00:13:36.185 "data_size": 
63488 00:13:36.185 }, 00:13:36.185 { 00:13:36.185 "name": "BaseBdev4", 00:13:36.185 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:36.185 "is_configured": true, 00:13:36.185 "data_offset": 2048, 00:13:36.185 "data_size": 63488 00:13:36.185 } 00:13:36.185 ] 00:13:36.185 }' 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.185 10:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.185 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.185 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:36.185 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.185 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.185 [2024-11-18 10:42:02.027674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.185 [2024-11-18 10:42:02.065064] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.185 [2024-11-18 10:42:02.065163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.185 [2024-11-18 10:42:02.065210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.185 [2024-11-18 10:42:02.065231] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.445 "name": "raid_bdev1", 00:13:36.445 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:36.445 "strip_size_kb": 0, 00:13:36.445 "state": "online", 00:13:36.445 "raid_level": "raid1", 00:13:36.445 "superblock": true, 00:13:36.445 "num_base_bdevs": 4, 00:13:36.445 "num_base_bdevs_discovered": 2, 00:13:36.445 "num_base_bdevs_operational": 2, 00:13:36.445 "base_bdevs_list": [ 00:13:36.445 { 00:13:36.445 "name": null, 
00:13:36.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.445 "is_configured": false, 00:13:36.445 "data_offset": 0, 00:13:36.445 "data_size": 63488 00:13:36.445 }, 00:13:36.445 { 00:13:36.445 "name": null, 00:13:36.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.445 "is_configured": false, 00:13:36.445 "data_offset": 2048, 00:13:36.445 "data_size": 63488 00:13:36.445 }, 00:13:36.445 { 00:13:36.445 "name": "BaseBdev3", 00:13:36.445 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:36.445 "is_configured": true, 00:13:36.445 "data_offset": 2048, 00:13:36.445 "data_size": 63488 00:13:36.445 }, 00:13:36.445 { 00:13:36.445 "name": "BaseBdev4", 00:13:36.445 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:36.445 "is_configured": true, 00:13:36.445 "data_offset": 2048, 00:13:36.445 "data_size": 63488 00:13:36.445 } 00:13:36.445 ] 00:13:36.445 }' 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.445 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.705 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:36.705 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.705 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.705 [2024-11-18 10:42:02.484785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:36.705 [2024-11-18 10:42:02.484882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.705 [2024-11-18 10:42:02.484925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:36.705 [2024-11-18 10:42:02.484954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.705 [2024-11-18 10:42:02.485454] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:13:36.705 [2024-11-18 10:42:02.485510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:36.705 [2024-11-18 10:42:02.485616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:36.705 [2024-11-18 10:42:02.485646] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:36.705 [2024-11-18 10:42:02.485684] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:36.705 [2024-11-18 10:42:02.485723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.705 [2024-11-18 10:42:02.499888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:36.705 spare 00:13:36.705 10:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.705 10:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:36.705 [2024-11-18 10:42:02.501739] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.645 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.645 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.645 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.645 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.645 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.645 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.645 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.645 
10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.645 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.904 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.904 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.904 "name": "raid_bdev1", 00:13:37.904 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:37.905 "strip_size_kb": 0, 00:13:37.905 "state": "online", 00:13:37.905 "raid_level": "raid1", 00:13:37.905 "superblock": true, 00:13:37.905 "num_base_bdevs": 4, 00:13:37.905 "num_base_bdevs_discovered": 3, 00:13:37.905 "num_base_bdevs_operational": 3, 00:13:37.905 "process": { 00:13:37.905 "type": "rebuild", 00:13:37.905 "target": "spare", 00:13:37.905 "progress": { 00:13:37.905 "blocks": 20480, 00:13:37.905 "percent": 32 00:13:37.905 } 00:13:37.905 }, 00:13:37.905 "base_bdevs_list": [ 00:13:37.905 { 00:13:37.905 "name": "spare", 00:13:37.905 "uuid": "a9d12e65-550c-5cdc-bd0f-35918cb96b6f", 00:13:37.905 "is_configured": true, 00:13:37.905 "data_offset": 2048, 00:13:37.905 "data_size": 63488 00:13:37.905 }, 00:13:37.905 { 00:13:37.905 "name": null, 00:13:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.905 "is_configured": false, 00:13:37.905 "data_offset": 2048, 00:13:37.905 "data_size": 63488 00:13:37.905 }, 00:13:37.905 { 00:13:37.905 "name": "BaseBdev3", 00:13:37.905 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:37.905 "is_configured": true, 00:13:37.905 "data_offset": 2048, 00:13:37.905 "data_size": 63488 00:13:37.905 }, 00:13:37.905 { 00:13:37.905 "name": "BaseBdev4", 00:13:37.905 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:37.905 "is_configured": true, 00:13:37.905 "data_offset": 2048, 00:13:37.905 "data_size": 63488 00:13:37.905 } 00:13:37.905 ] 00:13:37.905 }' 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.905 [2024-11-18 10:42:03.665843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.905 [2024-11-18 10:42:03.706299] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.905 [2024-11-18 10:42:03.706406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.905 [2024-11-18 10:42:03.706443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.905 [2024-11-18 10:42:03.706466] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.905 "name": "raid_bdev1", 00:13:37.905 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:37.905 "strip_size_kb": 0, 00:13:37.905 "state": "online", 00:13:37.905 "raid_level": "raid1", 00:13:37.905 "superblock": true, 00:13:37.905 "num_base_bdevs": 4, 00:13:37.905 "num_base_bdevs_discovered": 2, 00:13:37.905 "num_base_bdevs_operational": 2, 00:13:37.905 "base_bdevs_list": [ 00:13:37.905 { 00:13:37.905 "name": null, 00:13:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.905 "is_configured": false, 00:13:37.905 "data_offset": 0, 00:13:37.905 "data_size": 63488 00:13:37.905 }, 00:13:37.905 { 00:13:37.905 "name": null, 00:13:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.905 "is_configured": false, 00:13:37.905 "data_offset": 2048, 
00:13:37.905 "data_size": 63488 00:13:37.905 }, 00:13:37.905 { 00:13:37.905 "name": "BaseBdev3", 00:13:37.905 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:37.905 "is_configured": true, 00:13:37.905 "data_offset": 2048, 00:13:37.905 "data_size": 63488 00:13:37.905 }, 00:13:37.905 { 00:13:37.905 "name": "BaseBdev4", 00:13:37.905 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:37.905 "is_configured": true, 00:13:37.905 "data_offset": 2048, 00:13:37.905 "data_size": 63488 00:13:37.905 } 00:13:37.905 ] 00:13:37.905 }' 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.905 10:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.476 "name": "raid_bdev1", 00:13:38.476 "uuid": 
"29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:38.476 "strip_size_kb": 0, 00:13:38.476 "state": "online", 00:13:38.476 "raid_level": "raid1", 00:13:38.476 "superblock": true, 00:13:38.476 "num_base_bdevs": 4, 00:13:38.476 "num_base_bdevs_discovered": 2, 00:13:38.476 "num_base_bdevs_operational": 2, 00:13:38.476 "base_bdevs_list": [ 00:13:38.476 { 00:13:38.476 "name": null, 00:13:38.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.476 "is_configured": false, 00:13:38.476 "data_offset": 0, 00:13:38.476 "data_size": 63488 00:13:38.476 }, 00:13:38.476 { 00:13:38.476 "name": null, 00:13:38.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.476 "is_configured": false, 00:13:38.476 "data_offset": 2048, 00:13:38.476 "data_size": 63488 00:13:38.476 }, 00:13:38.476 { 00:13:38.476 "name": "BaseBdev3", 00:13:38.476 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:38.476 "is_configured": true, 00:13:38.476 "data_offset": 2048, 00:13:38.476 "data_size": 63488 00:13:38.476 }, 00:13:38.476 { 00:13:38.476 "name": "BaseBdev4", 00:13:38.476 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:38.476 "is_configured": true, 00:13:38.476 "data_offset": 2048, 00:13:38.476 "data_size": 63488 00:13:38.476 } 00:13:38.476 ] 00:13:38.476 }' 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.476 [2024-11-18 10:42:04.322478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:38.476 [2024-11-18 10:42:04.322529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.476 [2024-11-18 10:42:04.322548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:38.476 [2024-11-18 10:42:04.322558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.476 [2024-11-18 10:42:04.323007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.476 [2024-11-18 10:42:04.323029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.476 [2024-11-18 10:42:04.323103] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:38.476 [2024-11-18 10:42:04.323120] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:38.476 [2024-11-18 10:42:04.323128] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:38.476 [2024-11-18 10:42:04.323151] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:38.476 BaseBdev1 00:13:38.476 10:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.476 10:42:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.860 "name": "raid_bdev1", 00:13:39.860 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:39.860 "strip_size_kb": 0, 00:13:39.860 "state": "online", 00:13:39.860 
"raid_level": "raid1", 00:13:39.860 "superblock": true, 00:13:39.860 "num_base_bdevs": 4, 00:13:39.860 "num_base_bdevs_discovered": 2, 00:13:39.860 "num_base_bdevs_operational": 2, 00:13:39.860 "base_bdevs_list": [ 00:13:39.860 { 00:13:39.860 "name": null, 00:13:39.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.860 "is_configured": false, 00:13:39.860 "data_offset": 0, 00:13:39.860 "data_size": 63488 00:13:39.860 }, 00:13:39.860 { 00:13:39.860 "name": null, 00:13:39.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.860 "is_configured": false, 00:13:39.860 "data_offset": 2048, 00:13:39.860 "data_size": 63488 00:13:39.860 }, 00:13:39.860 { 00:13:39.860 "name": "BaseBdev3", 00:13:39.860 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:39.860 "is_configured": true, 00:13:39.860 "data_offset": 2048, 00:13:39.860 "data_size": 63488 00:13:39.860 }, 00:13:39.860 { 00:13:39.860 "name": "BaseBdev4", 00:13:39.860 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:39.860 "is_configured": true, 00:13:39.860 "data_offset": 2048, 00:13:39.860 "data_size": 63488 00:13:39.860 } 00:13:39.860 ] 00:13:39.860 }' 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.860 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.120 "name": "raid_bdev1", 00:13:40.120 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:40.120 "strip_size_kb": 0, 00:13:40.120 "state": "online", 00:13:40.120 "raid_level": "raid1", 00:13:40.120 "superblock": true, 00:13:40.120 "num_base_bdevs": 4, 00:13:40.120 "num_base_bdevs_discovered": 2, 00:13:40.120 "num_base_bdevs_operational": 2, 00:13:40.120 "base_bdevs_list": [ 00:13:40.120 { 00:13:40.120 "name": null, 00:13:40.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.120 "is_configured": false, 00:13:40.120 "data_offset": 0, 00:13:40.120 "data_size": 63488 00:13:40.120 }, 00:13:40.120 { 00:13:40.120 "name": null, 00:13:40.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.120 "is_configured": false, 00:13:40.120 "data_offset": 2048, 00:13:40.120 "data_size": 63488 00:13:40.120 }, 00:13:40.120 { 00:13:40.120 "name": "BaseBdev3", 00:13:40.120 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:40.120 "is_configured": true, 00:13:40.120 "data_offset": 2048, 00:13:40.120 "data_size": 63488 00:13:40.120 }, 00:13:40.120 { 00:13:40.120 "name": "BaseBdev4", 00:13:40.120 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:40.120 "is_configured": true, 00:13:40.120 "data_offset": 2048, 00:13:40.120 "data_size": 63488 00:13:40.120 } 00:13:40.120 ] 00:13:40.120 }' 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.120 [2024-11-18 10:42:05.955623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.120 [2024-11-18 10:42:05.955791] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:40.120 [2024-11-18 10:42:05.955805] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:40.120 
request: 00:13:40.120 { 00:13:40.120 "base_bdev": "BaseBdev1", 00:13:40.120 "raid_bdev": "raid_bdev1", 00:13:40.120 "method": "bdev_raid_add_base_bdev", 00:13:40.120 "req_id": 1 00:13:40.120 } 00:13:40.120 Got JSON-RPC error response 00:13:40.120 response: 00:13:40.120 { 00:13:40.120 "code": -22, 00:13:40.120 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:40.120 } 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.120 10:42:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.500 10:42:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.500 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.500 "name": "raid_bdev1", 00:13:41.500 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:41.500 "strip_size_kb": 0, 00:13:41.500 "state": "online", 00:13:41.500 "raid_level": "raid1", 00:13:41.500 "superblock": true, 00:13:41.500 "num_base_bdevs": 4, 00:13:41.500 "num_base_bdevs_discovered": 2, 00:13:41.500 "num_base_bdevs_operational": 2, 00:13:41.500 "base_bdevs_list": [ 00:13:41.500 { 00:13:41.500 "name": null, 00:13:41.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.500 "is_configured": false, 00:13:41.500 "data_offset": 0, 00:13:41.500 "data_size": 63488 00:13:41.500 }, 00:13:41.500 { 00:13:41.500 "name": null, 00:13:41.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.500 "is_configured": false, 00:13:41.500 "data_offset": 2048, 00:13:41.500 "data_size": 63488 00:13:41.500 }, 00:13:41.500 { 00:13:41.500 "name": "BaseBdev3", 00:13:41.500 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:41.500 "is_configured": true, 00:13:41.500 "data_offset": 2048, 00:13:41.500 "data_size": 63488 00:13:41.500 }, 00:13:41.500 { 00:13:41.500 "name": "BaseBdev4", 00:13:41.500 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:41.500 "is_configured": true, 00:13:41.500 
"data_offset": 2048, 00:13:41.500 "data_size": 63488 00:13:41.500 } 00:13:41.500 ] 00:13:41.500 }' 00:13:41.500 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.500 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.761 "name": "raid_bdev1", 00:13:41.761 "uuid": "29fb579b-15e5-4ced-afbf-b924a1642cb1", 00:13:41.761 "strip_size_kb": 0, 00:13:41.761 "state": "online", 00:13:41.761 "raid_level": "raid1", 00:13:41.761 "superblock": true, 00:13:41.761 "num_base_bdevs": 4, 00:13:41.761 "num_base_bdevs_discovered": 2, 00:13:41.761 "num_base_bdevs_operational": 2, 00:13:41.761 "base_bdevs_list": [ 00:13:41.761 { 00:13:41.761 "name": null, 00:13:41.761 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:41.761 "is_configured": false, 00:13:41.761 "data_offset": 0, 00:13:41.761 "data_size": 63488 00:13:41.761 }, 00:13:41.761 { 00:13:41.761 "name": null, 00:13:41.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.761 "is_configured": false, 00:13:41.761 "data_offset": 2048, 00:13:41.761 "data_size": 63488 00:13:41.761 }, 00:13:41.761 { 00:13:41.761 "name": "BaseBdev3", 00:13:41.761 "uuid": "5fd7593f-286d-5910-ba46-62044fd68426", 00:13:41.761 "is_configured": true, 00:13:41.761 "data_offset": 2048, 00:13:41.761 "data_size": 63488 00:13:41.761 }, 00:13:41.761 { 00:13:41.761 "name": "BaseBdev4", 00:13:41.761 "uuid": "eb324128-11c2-5714-9dfb-a708a0f27bd7", 00:13:41.761 "is_configured": true, 00:13:41.761 "data_offset": 2048, 00:13:41.761 "data_size": 63488 00:13:41.761 } 00:13:41.761 ] 00:13:41.761 }' 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77802 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77802 ']' 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77802 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77802 00:13:41.761 killing process with pid 77802 00:13:41.761 Received shutdown signal, 
test time was about 60.000000 seconds 00:13:41.761 00:13:41.761 Latency(us) 00:13:41.761 [2024-11-18T10:42:07.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.761 [2024-11-18T10:42:07.646Z] =================================================================================================================== 00:13:41.761 [2024-11-18T10:42:07.646Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77802' 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77802 00:13:41.761 [2024-11-18 10:42:07.601138] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.761 [2024-11-18 10:42:07.601260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.761 10:42:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77802 00:13:41.761 [2024-11-18 10:42:07.601324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.761 [2024-11-18 10:42:07.601333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:42.329 [2024-11-18 10:42:08.058451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.269 10:42:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:43.269 00:13:43.269 real 0m24.849s 00:13:43.269 user 0m29.750s 00:13:43.269 sys 0m3.876s 00:13:43.269 ************************************ 00:13:43.269 END TEST raid_rebuild_test_sb 00:13:43.269 ************************************ 00:13:43.269 10:42:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.269 10:42:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.269 10:42:09 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:43.269 10:42:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:43.269 10:42:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.269 10:42:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.529 ************************************ 00:13:43.529 START TEST raid_rebuild_test_io 00:13:43.529 ************************************ 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:43.529 10:42:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78550 00:13:43.529 10:42:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78550 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78550 ']' 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.529 10:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.529 [2024-11-18 10:42:09.271118] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:43.529 [2024-11-18 10:42:09.271340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78550 ] 00:13:43.529 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:43.529 Zero copy mechanism will not be used. 
00:13:43.789 [2024-11-18 10:42:09.447589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.789 [2024-11-18 10:42:09.555980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.049 [2024-11-18 10:42:09.716442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.049 [2024-11-18 10:42:09.716571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.309 BaseBdev1_malloc 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.309 [2024-11-18 10:42:10.130249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:44.309 [2024-11-18 10:42:10.130375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.309 [2024-11-18 10:42:10.130416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:44.309 [2024-11-18 
10:42:10.130448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.309 [2024-11-18 10:42:10.132492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.309 [2024-11-18 10:42:10.132577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:44.309 BaseBdev1 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.309 BaseBdev2_malloc 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.309 [2024-11-18 10:42:10.183691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:44.309 [2024-11-18 10:42:10.183750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.309 [2024-11-18 10:42:10.183767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:44.309 [2024-11-18 10:42:10.183777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.309 [2024-11-18 10:42:10.185736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:44.309 [2024-11-18 10:42:10.185775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:44.309 BaseBdev2 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.309 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.569 BaseBdev3_malloc 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.569 [2024-11-18 10:42:10.245133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:44.569 [2024-11-18 10:42:10.245209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.569 [2024-11-18 10:42:10.245230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:44.569 [2024-11-18 10:42:10.245241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.569 [2024-11-18 10:42:10.247237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.569 [2024-11-18 10:42:10.247273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:44.569 BaseBdev3 00:13:44.569 10:42:10 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.569 BaseBdev4_malloc 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.569 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.569 [2024-11-18 10:42:10.300129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:44.569 [2024-11-18 10:42:10.300196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.569 [2024-11-18 10:42:10.300231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:44.569 [2024-11-18 10:42:10.300242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.570 [2024-11-18 10:42:10.302208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.570 [2024-11-18 10:42:10.302244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:44.570 BaseBdev4 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.570 spare_malloc 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.570 spare_delay 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.570 [2024-11-18 10:42:10.361323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:44.570 [2024-11-18 10:42:10.361379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.570 [2024-11-18 10:42:10.361398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:44.570 [2024-11-18 10:42:10.361408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.570 [2024-11-18 10:42:10.363397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.570 [2024-11-18 10:42:10.363489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.570 spare 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.570 [2024-11-18 10:42:10.373354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.570 [2024-11-18 10:42:10.375088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.570 [2024-11-18 10:42:10.375156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.570 [2024-11-18 10:42:10.375217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:44.570 [2024-11-18 10:42:10.375292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:44.570 [2024-11-18 10:42:10.375306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:44.570 [2024-11-18 10:42:10.375539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:44.570 [2024-11-18 10:42:10.375709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:44.570 [2024-11-18 10:42:10.375734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:44.570 [2024-11-18 10:42:10.375872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:44.570 10:42:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.570 "name": "raid_bdev1", 00:13:44.570 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:44.570 "strip_size_kb": 0, 00:13:44.570 "state": "online", 00:13:44.570 "raid_level": "raid1", 00:13:44.570 "superblock": false, 00:13:44.570 "num_base_bdevs": 4, 00:13:44.570 "num_base_bdevs_discovered": 4, 00:13:44.570 "num_base_bdevs_operational": 4, 00:13:44.570 "base_bdevs_list": [ 00:13:44.570 
{ 00:13:44.570 "name": "BaseBdev1", 00:13:44.570 "uuid": "e5abac0b-d689-5573-9480-b0d4af79c2e3", 00:13:44.570 "is_configured": true, 00:13:44.570 "data_offset": 0, 00:13:44.570 "data_size": 65536 00:13:44.570 }, 00:13:44.570 { 00:13:44.570 "name": "BaseBdev2", 00:13:44.570 "uuid": "e8bc6bfb-68bf-5f7b-ad6d-72d63bee061d", 00:13:44.570 "is_configured": true, 00:13:44.570 "data_offset": 0, 00:13:44.570 "data_size": 65536 00:13:44.570 }, 00:13:44.570 { 00:13:44.570 "name": "BaseBdev3", 00:13:44.570 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:44.570 "is_configured": true, 00:13:44.570 "data_offset": 0, 00:13:44.570 "data_size": 65536 00:13:44.570 }, 00:13:44.570 { 00:13:44.570 "name": "BaseBdev4", 00:13:44.570 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:44.570 "is_configured": true, 00:13:44.570 "data_offset": 0, 00:13:44.570 "data_size": 65536 00:13:44.570 } 00:13:44.570 ] 00:13:44.570 }' 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.570 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.227 [2024-11-18 10:42:10.816915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.227 [2024-11-18 10:42:10.892453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.227 "name": "raid_bdev1", 00:13:45.227 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:45.227 "strip_size_kb": 0, 00:13:45.227 "state": "online", 00:13:45.227 "raid_level": "raid1", 00:13:45.227 "superblock": false, 00:13:45.227 "num_base_bdevs": 4, 00:13:45.227 "num_base_bdevs_discovered": 3, 00:13:45.227 "num_base_bdevs_operational": 3, 00:13:45.227 "base_bdevs_list": [ 00:13:45.227 { 00:13:45.227 "name": null, 00:13:45.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.227 "is_configured": false, 00:13:45.227 "data_offset": 0, 00:13:45.227 "data_size": 65536 00:13:45.227 }, 00:13:45.227 { 00:13:45.227 "name": "BaseBdev2", 00:13:45.227 "uuid": "e8bc6bfb-68bf-5f7b-ad6d-72d63bee061d", 00:13:45.227 "is_configured": true, 00:13:45.227 "data_offset": 0, 00:13:45.227 "data_size": 65536 00:13:45.227 }, 00:13:45.227 { 00:13:45.227 "name": "BaseBdev3", 00:13:45.227 "uuid": 
"a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:45.227 "is_configured": true, 00:13:45.227 "data_offset": 0, 00:13:45.227 "data_size": 65536 00:13:45.227 }, 00:13:45.227 { 00:13:45.227 "name": "BaseBdev4", 00:13:45.227 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:45.227 "is_configured": true, 00:13:45.227 "data_offset": 0, 00:13:45.227 "data_size": 65536 00:13:45.227 } 00:13:45.227 ] 00:13:45.227 }' 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.227 10:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.227 [2024-11-18 10:42:10.987735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:45.227 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:45.227 Zero copy mechanism will not be used. 00:13:45.227 Running I/O for 60 seconds... 00:13:45.488 10:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:45.488 10:42:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.488 10:42:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.488 [2024-11-18 10:42:11.337570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.488 10:42:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.488 10:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:45.748 [2024-11-18 10:42:11.376999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:45.748 [2024-11-18 10:42:11.378834] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.748 [2024-11-18 10:42:11.486703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:45.748 
[2024-11-18 10:42:11.487124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:46.008 [2024-11-18 10:42:11.691833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:46.008 [2024-11-18 10:42:11.692658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:46.270 157.00 IOPS, 471.00 MiB/s [2024-11-18T10:42:12.155Z] [2024-11-18 10:42:12.044719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:46.530 [2024-11-18 10:42:12.154996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.530 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.790 10:42:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.790 "name": "raid_bdev1", 00:13:46.790 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:46.790 "strip_size_kb": 0, 00:13:46.790 "state": "online", 00:13:46.790 "raid_level": "raid1", 00:13:46.790 "superblock": false, 00:13:46.790 "num_base_bdevs": 4, 00:13:46.790 "num_base_bdevs_discovered": 4, 00:13:46.790 "num_base_bdevs_operational": 4, 00:13:46.790 "process": { 00:13:46.790 "type": "rebuild", 00:13:46.790 "target": "spare", 00:13:46.790 "progress": { 00:13:46.790 "blocks": 12288, 00:13:46.790 "percent": 18 00:13:46.790 } 00:13:46.790 }, 00:13:46.790 "base_bdevs_list": [ 00:13:46.790 { 00:13:46.790 "name": "spare", 00:13:46.790 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:46.790 "is_configured": true, 00:13:46.790 "data_offset": 0, 00:13:46.790 "data_size": 65536 00:13:46.790 }, 00:13:46.790 { 00:13:46.790 "name": "BaseBdev2", 00:13:46.790 "uuid": "e8bc6bfb-68bf-5f7b-ad6d-72d63bee061d", 00:13:46.790 "is_configured": true, 00:13:46.790 "data_offset": 0, 00:13:46.790 "data_size": 65536 00:13:46.790 }, 00:13:46.790 { 00:13:46.790 "name": "BaseBdev3", 00:13:46.790 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:46.790 "is_configured": true, 00:13:46.790 "data_offset": 0, 00:13:46.790 "data_size": 65536 00:13:46.790 }, 00:13:46.790 { 00:13:46.790 "name": "BaseBdev4", 00:13:46.790 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:46.790 "is_configured": true, 00:13:46.790 "data_offset": 0, 00:13:46.790 "data_size": 65536 00:13:46.790 } 00:13:46.790 ] 00:13:46.790 }' 00:13:46.790 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.790 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.790 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.790 [2024-11-18 10:42:12.483245] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:46.790 [2024-11-18 10:42:12.483623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:46.790 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.790 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:46.790 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.790 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.790 [2024-11-18 10:42:12.513877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.790 [2024-11-18 10:42:12.593017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:47.051 [2024-11-18 10:42:12.698035] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.051 [2024-11-18 10:42:12.709354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.051 [2024-11-18 10:42:12.709412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.051 [2024-11-18 10:42:12.709425] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.051 [2024-11-18 10:42:12.748932] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.051 "name": "raid_bdev1", 00:13:47.051 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:47.051 "strip_size_kb": 0, 00:13:47.051 "state": "online", 00:13:47.051 "raid_level": "raid1", 00:13:47.051 "superblock": false, 00:13:47.051 "num_base_bdevs": 4, 00:13:47.051 "num_base_bdevs_discovered": 3, 00:13:47.051 "num_base_bdevs_operational": 3, 00:13:47.051 "base_bdevs_list": [ 00:13:47.051 { 00:13:47.051 "name": null, 00:13:47.051 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:47.051 "is_configured": false, 00:13:47.051 "data_offset": 0, 00:13:47.051 "data_size": 65536 00:13:47.051 }, 00:13:47.051 { 00:13:47.051 "name": "BaseBdev2", 00:13:47.051 "uuid": "e8bc6bfb-68bf-5f7b-ad6d-72d63bee061d", 00:13:47.051 "is_configured": true, 00:13:47.051 "data_offset": 0, 00:13:47.051 "data_size": 65536 00:13:47.051 }, 00:13:47.051 { 00:13:47.051 "name": "BaseBdev3", 00:13:47.051 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:47.051 "is_configured": true, 00:13:47.051 "data_offset": 0, 00:13:47.051 "data_size": 65536 00:13:47.051 }, 00:13:47.051 { 00:13:47.051 "name": "BaseBdev4", 00:13:47.051 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:47.051 "is_configured": true, 00:13:47.051 "data_offset": 0, 00:13:47.051 "data_size": 65536 00:13:47.051 } 00:13:47.051 ] 00:13:47.051 }' 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.051 10:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 144.00 IOPS, 432.00 MiB/s [2024-11-18T10:42:13.456Z] 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 
10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.571 "name": "raid_bdev1", 00:13:47.571 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:47.571 "strip_size_kb": 0, 00:13:47.571 "state": "online", 00:13:47.571 "raid_level": "raid1", 00:13:47.571 "superblock": false, 00:13:47.571 "num_base_bdevs": 4, 00:13:47.571 "num_base_bdevs_discovered": 3, 00:13:47.571 "num_base_bdevs_operational": 3, 00:13:47.571 "base_bdevs_list": [ 00:13:47.571 { 00:13:47.571 "name": null, 00:13:47.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.571 "is_configured": false, 00:13:47.571 "data_offset": 0, 00:13:47.571 "data_size": 65536 00:13:47.571 }, 00:13:47.571 { 00:13:47.571 "name": "BaseBdev2", 00:13:47.571 "uuid": "e8bc6bfb-68bf-5f7b-ad6d-72d63bee061d", 00:13:47.571 "is_configured": true, 00:13:47.571 "data_offset": 0, 00:13:47.571 "data_size": 65536 00:13:47.571 }, 00:13:47.571 { 00:13:47.571 "name": "BaseBdev3", 00:13:47.571 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:47.571 "is_configured": true, 00:13:47.571 "data_offset": 0, 00:13:47.571 "data_size": 65536 00:13:47.571 }, 00:13:47.571 { 00:13:47.571 "name": "BaseBdev4", 00:13:47.571 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:47.571 "is_configured": true, 00:13:47.571 "data_offset": 0, 00:13:47.571 "data_size": 65536 00:13:47.571 } 00:13:47.571 ] 00:13:47.571 }' 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.571 10:42:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.571 [2024-11-18 10:42:13.337054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.571 10:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:47.571 [2024-11-18 10:42:13.400674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:47.571 [2024-11-18 10:42:13.402572] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.831 [2024-11-18 10:42:13.515857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:47.831 [2024-11-18 10:42:13.517167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:48.091 [2024-11-18 10:42:13.747337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:48.091 [2024-11-18 10:42:13.747972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:48.351 156.67 IOPS, 470.00 MiB/s [2024-11-18T10:42:14.236Z] [2024-11-18 10:42:14.082769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:48.610 [2024-11-18 10:42:14.295207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 
offset_end: 12288 00:13:48.610 [2024-11-18 10:42:14.295866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.610 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.610 "name": "raid_bdev1", 00:13:48.610 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:48.610 "strip_size_kb": 0, 00:13:48.610 "state": "online", 00:13:48.610 "raid_level": "raid1", 00:13:48.610 "superblock": false, 00:13:48.610 "num_base_bdevs": 4, 00:13:48.610 "num_base_bdevs_discovered": 4, 00:13:48.610 "num_base_bdevs_operational": 4, 00:13:48.610 "process": { 00:13:48.610 "type": "rebuild", 00:13:48.610 "target": "spare", 00:13:48.610 "progress": { 00:13:48.610 "blocks": 10240, 00:13:48.610 "percent": 15 00:13:48.610 } 00:13:48.610 }, 00:13:48.610 "base_bdevs_list": [ 
00:13:48.610 { 00:13:48.610 "name": "spare", 00:13:48.611 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:48.611 "is_configured": true, 00:13:48.611 "data_offset": 0, 00:13:48.611 "data_size": 65536 00:13:48.611 }, 00:13:48.611 { 00:13:48.611 "name": "BaseBdev2", 00:13:48.611 "uuid": "e8bc6bfb-68bf-5f7b-ad6d-72d63bee061d", 00:13:48.611 "is_configured": true, 00:13:48.611 "data_offset": 0, 00:13:48.611 "data_size": 65536 00:13:48.611 }, 00:13:48.611 { 00:13:48.611 "name": "BaseBdev3", 00:13:48.611 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:48.611 "is_configured": true, 00:13:48.611 "data_offset": 0, 00:13:48.611 "data_size": 65536 00:13:48.611 }, 00:13:48.611 { 00:13:48.611 "name": "BaseBdev4", 00:13:48.611 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:48.611 "is_configured": true, 00:13:48.611 "data_offset": 0, 00:13:48.611 "data_size": 65536 00:13:48.611 } 00:13:48.611 ] 00:13:48.611 }' 00:13:48.611 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.611 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.611 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.871 [2024-11-18 10:42:14.536937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:48.871 [2024-11-18 10:42:14.685904] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:48.871 [2024-11-18 10:42:14.685992] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:48.871 [2024-11-18 10:42:14.699857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.871 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.871 "name": "raid_bdev1", 00:13:48.871 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:48.871 "strip_size_kb": 0, 00:13:48.871 "state": "online", 00:13:48.871 "raid_level": "raid1", 00:13:48.871 "superblock": false, 00:13:48.871 "num_base_bdevs": 4, 00:13:48.871 "num_base_bdevs_discovered": 3, 00:13:48.871 "num_base_bdevs_operational": 3, 00:13:48.871 "process": { 00:13:48.871 "type": "rebuild", 00:13:48.871 "target": "spare", 00:13:48.871 "progress": { 00:13:48.871 "blocks": 14336, 00:13:48.871 "percent": 21 00:13:48.871 } 00:13:48.871 }, 00:13:48.871 "base_bdevs_list": [ 00:13:48.871 { 00:13:48.871 "name": "spare", 00:13:48.871 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:48.871 "is_configured": true, 00:13:48.871 "data_offset": 0, 00:13:48.871 "data_size": 65536 00:13:48.871 }, 00:13:48.871 { 00:13:48.871 "name": null, 00:13:48.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.871 "is_configured": false, 00:13:48.871 "data_offset": 0, 00:13:48.871 "data_size": 65536 00:13:48.871 }, 00:13:48.871 { 00:13:48.871 "name": "BaseBdev3", 00:13:48.871 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:48.871 "is_configured": true, 00:13:48.871 "data_offset": 0, 00:13:48.871 "data_size": 65536 00:13:48.871 }, 00:13:48.871 { 00:13:48.871 "name": "BaseBdev4", 00:13:48.871 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:48.871 "is_configured": true, 00:13:48.871 "data_offset": 0, 00:13:48.871 "data_size": 65536 00:13:48.871 } 00:13:48.871 ] 00:13:48.871 }' 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.131 [2024-11-18 10:42:14.835031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=476 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.131 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.131 "name": "raid_bdev1", 00:13:49.132 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:49.132 "strip_size_kb": 0, 00:13:49.132 "state": "online", 00:13:49.132 "raid_level": 
"raid1", 00:13:49.132 "superblock": false, 00:13:49.132 "num_base_bdevs": 4, 00:13:49.132 "num_base_bdevs_discovered": 3, 00:13:49.132 "num_base_bdevs_operational": 3, 00:13:49.132 "process": { 00:13:49.132 "type": "rebuild", 00:13:49.132 "target": "spare", 00:13:49.132 "progress": { 00:13:49.132 "blocks": 16384, 00:13:49.132 "percent": 25 00:13:49.132 } 00:13:49.132 }, 00:13:49.132 "base_bdevs_list": [ 00:13:49.132 { 00:13:49.132 "name": "spare", 00:13:49.132 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:49.132 "is_configured": true, 00:13:49.132 "data_offset": 0, 00:13:49.132 "data_size": 65536 00:13:49.132 }, 00:13:49.132 { 00:13:49.132 "name": null, 00:13:49.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.132 "is_configured": false, 00:13:49.132 "data_offset": 0, 00:13:49.132 "data_size": 65536 00:13:49.132 }, 00:13:49.132 { 00:13:49.132 "name": "BaseBdev3", 00:13:49.132 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:49.132 "is_configured": true, 00:13:49.132 "data_offset": 0, 00:13:49.132 "data_size": 65536 00:13:49.132 }, 00:13:49.132 { 00:13:49.132 "name": "BaseBdev4", 00:13:49.132 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:49.132 "is_configured": true, 00:13:49.132 "data_offset": 0, 00:13:49.132 "data_size": 65536 00:13:49.132 } 00:13:49.132 ] 00:13:49.132 }' 00:13:49.132 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.132 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.132 10:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.132 136.50 IOPS, 409.50 MiB/s [2024-11-18T10:42:15.017Z] 10:42:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.132 10:42:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:49.392 [2024-11-18 10:42:15.154894] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:49.651 [2024-11-18 10:42:15.377029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:49.912 [2024-11-18 10:42:15.597155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:49.912 [2024-11-18 10:42:15.598063] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:50.172 [2024-11-18 10:42:15.825024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:50.172 121.60 IOPS, 364.80 MiB/s [2024-11-18T10:42:16.057Z] 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.172 10:42:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.432 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.432 "name": "raid_bdev1", 00:13:50.432 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:50.432 "strip_size_kb": 0, 00:13:50.432 "state": "online", 00:13:50.432 "raid_level": "raid1", 00:13:50.432 "superblock": false, 00:13:50.432 "num_base_bdevs": 4, 00:13:50.432 "num_base_bdevs_discovered": 3, 00:13:50.432 "num_base_bdevs_operational": 3, 00:13:50.432 "process": { 00:13:50.432 "type": "rebuild", 00:13:50.432 "target": "spare", 00:13:50.432 "progress": { 00:13:50.432 "blocks": 28672, 00:13:50.432 "percent": 43 00:13:50.432 } 00:13:50.432 }, 00:13:50.432 "base_bdevs_list": [ 00:13:50.432 { 00:13:50.432 "name": "spare", 00:13:50.432 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:50.432 "is_configured": true, 00:13:50.432 "data_offset": 0, 00:13:50.432 "data_size": 65536 00:13:50.432 }, 00:13:50.432 { 00:13:50.432 "name": null, 00:13:50.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.432 "is_configured": false, 00:13:50.432 "data_offset": 0, 00:13:50.432 "data_size": 65536 00:13:50.432 }, 00:13:50.432 { 00:13:50.432 "name": "BaseBdev3", 00:13:50.432 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:50.432 "is_configured": true, 00:13:50.432 "data_offset": 0, 00:13:50.432 "data_size": 65536 00:13:50.432 }, 00:13:50.432 { 00:13:50.432 "name": "BaseBdev4", 00:13:50.432 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:50.432 "is_configured": true, 00:13:50.432 "data_offset": 0, 00:13:50.432 "data_size": 65536 00:13:50.432 } 00:13:50.432 ] 00:13:50.432 }' 00:13:50.432 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.432 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.432 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:13:50.432 [2024-11-18 10:42:16.148026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:50.432 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.432 10:42:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:50.692 [2024-11-18 10:42:16.361203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:50.692 [2024-11-18 10:42:16.361700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:50.952 [2024-11-18 10:42:16.685482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:51.471 106.17 IOPS, 318.50 MiB/s [2024-11-18T10:42:17.356Z] 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.471 "name": "raid_bdev1", 00:13:51.471 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:51.471 "strip_size_kb": 0, 00:13:51.471 "state": "online", 00:13:51.471 "raid_level": "raid1", 00:13:51.471 "superblock": false, 00:13:51.471 "num_base_bdevs": 4, 00:13:51.471 "num_base_bdevs_discovered": 3, 00:13:51.471 "num_base_bdevs_operational": 3, 00:13:51.471 "process": { 00:13:51.471 "type": "rebuild", 00:13:51.471 "target": "spare", 00:13:51.471 "progress": { 00:13:51.471 "blocks": 45056, 00:13:51.471 "percent": 68 00:13:51.471 } 00:13:51.471 }, 00:13:51.471 "base_bdevs_list": [ 00:13:51.471 { 00:13:51.471 "name": "spare", 00:13:51.471 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:51.471 "is_configured": true, 00:13:51.471 "data_offset": 0, 00:13:51.471 "data_size": 65536 00:13:51.471 }, 00:13:51.471 { 00:13:51.471 "name": null, 00:13:51.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.471 "is_configured": false, 00:13:51.471 "data_offset": 0, 00:13:51.471 "data_size": 65536 00:13:51.471 }, 00:13:51.471 { 00:13:51.471 "name": "BaseBdev3", 00:13:51.471 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:51.471 "is_configured": true, 00:13:51.471 "data_offset": 0, 00:13:51.471 "data_size": 65536 00:13:51.471 }, 00:13:51.471 { 00:13:51.471 "name": "BaseBdev4", 00:13:51.471 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:51.471 "is_configured": true, 00:13:51.471 "data_offset": 0, 00:13:51.471 "data_size": 65536 00:13:51.471 } 00:13:51.471 ] 00:13:51.471 }' 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.471 10:42:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.411 96.14 IOPS, 288.43 MiB/s [2024-11-18T10:42:18.296Z] [2024-11-18 10:42:18.200260] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.671 [2024-11-18 10:42:18.305387] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.671 [2024-11-18 10:42:18.309156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.671 10:42:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.671 "name": "raid_bdev1", 00:13:52.671 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:52.671 "strip_size_kb": 0, 00:13:52.671 "state": "online", 00:13:52.671 "raid_level": "raid1", 00:13:52.671 "superblock": false, 00:13:52.671 "num_base_bdevs": 4, 00:13:52.671 "num_base_bdevs_discovered": 3, 00:13:52.671 "num_base_bdevs_operational": 3, 00:13:52.671 "base_bdevs_list": [ 00:13:52.671 { 00:13:52.671 "name": "spare", 00:13:52.671 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:52.671 "is_configured": true, 00:13:52.671 "data_offset": 0, 00:13:52.671 "data_size": 65536 00:13:52.671 }, 00:13:52.671 { 00:13:52.671 "name": null, 00:13:52.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.671 "is_configured": false, 00:13:52.671 "data_offset": 0, 00:13:52.671 "data_size": 65536 00:13:52.671 }, 00:13:52.671 { 00:13:52.671 "name": "BaseBdev3", 00:13:52.671 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:52.671 "is_configured": true, 00:13:52.671 "data_offset": 0, 00:13:52.671 "data_size": 65536 00:13:52.671 }, 00:13:52.671 { 00:13:52.671 "name": "BaseBdev4", 00:13:52.671 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:52.671 "is_configured": true, 00:13:52.671 "data_offset": 0, 00:13:52.671 "data_size": 65536 00:13:52.671 } 00:13:52.671 ] 00:13:52.671 }' 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.671 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.672 "name": "raid_bdev1", 00:13:52.672 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:52.672 "strip_size_kb": 0, 00:13:52.672 "state": "online", 00:13:52.672 "raid_level": "raid1", 00:13:52.672 "superblock": false, 00:13:52.672 "num_base_bdevs": 4, 00:13:52.672 "num_base_bdevs_discovered": 3, 00:13:52.672 "num_base_bdevs_operational": 3, 00:13:52.672 "base_bdevs_list": [ 00:13:52.672 { 00:13:52.672 "name": "spare", 00:13:52.672 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:52.672 "is_configured": true, 00:13:52.672 "data_offset": 0, 00:13:52.672 "data_size": 65536 00:13:52.672 }, 00:13:52.672 { 00:13:52.672 "name": null, 00:13:52.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.672 "is_configured": false, 00:13:52.672 "data_offset": 0, 00:13:52.672 "data_size": 65536 00:13:52.672 }, 00:13:52.672 { 
00:13:52.672 "name": "BaseBdev3", 00:13:52.672 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:52.672 "is_configured": true, 00:13:52.672 "data_offset": 0, 00:13:52.672 "data_size": 65536 00:13:52.672 }, 00:13:52.672 { 00:13:52.672 "name": "BaseBdev4", 00:13:52.672 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:52.672 "is_configured": true, 00:13:52.672 "data_offset": 0, 00:13:52.672 "data_size": 65536 00:13:52.672 } 00:13:52.672 ] 00:13:52.672 }' 00:13:52.672 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.938 "name": "raid_bdev1", 00:13:52.938 "uuid": "78e47eec-4da1-4581-9498-40d1186e0252", 00:13:52.938 "strip_size_kb": 0, 00:13:52.938 "state": "online", 00:13:52.938 "raid_level": "raid1", 00:13:52.938 "superblock": false, 00:13:52.938 "num_base_bdevs": 4, 00:13:52.938 "num_base_bdevs_discovered": 3, 00:13:52.938 "num_base_bdevs_operational": 3, 00:13:52.938 "base_bdevs_list": [ 00:13:52.938 { 00:13:52.938 "name": "spare", 00:13:52.938 "uuid": "03ed5206-d0e6-5955-b5a3-fdfc8edf438e", 00:13:52.938 "is_configured": true, 00:13:52.938 "data_offset": 0, 00:13:52.938 "data_size": 65536 00:13:52.938 }, 00:13:52.938 { 00:13:52.938 "name": null, 00:13:52.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.938 "is_configured": false, 00:13:52.938 "data_offset": 0, 00:13:52.938 "data_size": 65536 00:13:52.938 }, 00:13:52.938 { 00:13:52.938 "name": "BaseBdev3", 00:13:52.938 "uuid": "a67a7779-6f4b-548a-ae76-565f5a00cd66", 00:13:52.938 "is_configured": true, 00:13:52.938 "data_offset": 0, 00:13:52.938 "data_size": 65536 00:13:52.938 }, 00:13:52.938 { 00:13:52.938 "name": "BaseBdev4", 00:13:52.938 "uuid": "c58cbaf9-107a-5985-8eb2-68a1f5e41577", 00:13:52.938 "is_configured": true, 00:13:52.938 "data_offset": 0, 00:13:52.938 "data_size": 65536 00:13:52.938 } 00:13:52.938 ] 00:13:52.938 }' 00:13:52.938 10:42:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.938 10:42:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.204 89.12 IOPS, 267.38 MiB/s [2024-11-18T10:42:19.089Z] 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.204 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.204 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.204 [2024-11-18 10:42:19.050106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.204 [2024-11-18 10:42:19.050137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.204 00:13:53.204 Latency(us) 00:13:53.204 [2024-11-18T10:42:19.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.204 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:53.204 raid_bdev1 : 8.09 88.40 265.21 0.00 0.00 16156.96 287.97 116762.83 00:13:53.204 [2024-11-18T10:42:19.089Z] =================================================================================================================== 00:13:53.204 [2024-11-18T10:42:19.089Z] Total : 88.40 265.21 0.00 0.00 16156.96 287.97 116762.83 00:13:53.204 [2024-11-18 10:42:19.081650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.204 [2024-11-18 10:42:19.081689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.204 [2024-11-18 10:42:19.081780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.204 [2024-11-18 10:42:19.081789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:53.204 { 00:13:53.204 "results": [ 00:13:53.204 { 00:13:53.204 "job": 
"raid_bdev1", 00:13:53.204 "core_mask": "0x1", 00:13:53.204 "workload": "randrw", 00:13:53.204 "percentage": 50, 00:13:53.204 "status": "finished", 00:13:53.204 "queue_depth": 2, 00:13:53.204 "io_size": 3145728, 00:13:53.204 "runtime": 8.08798, 00:13:53.204 "iops": 88.40279031352699, 00:13:53.204 "mibps": 265.20837094058095, 00:13:53.204 "io_failed": 0, 00:13:53.204 "io_timeout": 0, 00:13:53.204 "avg_latency_us": 16156.958245946193, 00:13:53.204 "min_latency_us": 287.97205240174674, 00:13:53.204 "max_latency_us": 116762.82969432314 00:13:53.204 } 00:13:53.204 ], 00:13:53.204 "core_count": 1 00:13:53.204 } 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.464 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:53.464 /dev/nbd0 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.724 1+0 records in 00:13:53.724 1+0 records out 00:13:53.724 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000474878 s, 8.6 MB/s 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.724 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:53.724 /dev/nbd1 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.984 1+0 records in 00:13:53.984 1+0 records out 00:13:53.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542857 s, 7.5 MB/s 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.984 10:42:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.245 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:54.505 /dev/nbd1 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.505 1+0 records in 00:13:54.505 1+0 records out 00:13:54.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434202 s, 9.4 MB/s 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.505 10:42:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.505 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.766 10:42:20 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.766 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78550 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78550 ']' 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78550 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:55.026 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.027 
10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78550 00:13:55.027 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.027 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.027 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78550' 00:13:55.027 killing process with pid 78550 00:13:55.027 Received shutdown signal, test time was about 9.824845 seconds 00:13:55.027 00:13:55.027 Latency(us) 00:13:55.027 [2024-11-18T10:42:20.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.027 [2024-11-18T10:42:20.912Z] =================================================================================================================== 00:13:55.027 [2024-11-18T10:42:20.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:55.027 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78550 00:13:55.027 [2024-11-18 10:42:20.795543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.027 10:42:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78550 00:13:55.598 [2024-11-18 10:42:21.188117] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:56.539 00:13:56.539 real 0m13.111s 00:13:56.539 user 0m16.523s 00:13:56.539 sys 0m1.830s 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.539 ************************************ 00:13:56.539 END TEST raid_rebuild_test_io 00:13:56.539 ************************************ 00:13:56.539 10:42:22 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:56.539 10:42:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:56.539 10:42:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.539 10:42:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.539 ************************************ 00:13:56.539 START TEST raid_rebuild_test_sb_io 00:13:56.539 ************************************ 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.539 10:42:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78959 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78959 00:13:56.539 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78959 ']' 00:13:56.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.540 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.540 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.540 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.540 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.540 10:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.799 [2024-11-18 10:42:22.453255] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:56.799 [2024-11-18 10:42:22.453470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:56.799 Zero copy mechanism will not be used. 
00:13:56.799 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78959 ] 00:13:56.799 [2024-11-18 10:42:22.628038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.060 [2024-11-18 10:42:22.736849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.060 [2024-11-18 10:42:22.931444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.060 [2024-11-18 10:42:22.931531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.629 BaseBdev1_malloc 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.629 [2024-11-18 10:42:23.305752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:57.629 [2024-11-18 10:42:23.305835] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.629 [2024-11-18 10:42:23.305861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:57.629 [2024-11-18 10:42:23.305872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.629 [2024-11-18 10:42:23.308047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.629 [2024-11-18 10:42:23.308088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:57.629 BaseBdev1 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.629 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.630 BaseBdev2_malloc 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.630 [2024-11-18 10:42:23.358836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:57.630 [2024-11-18 10:42:23.358908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.630 [2024-11-18 10:42:23.358927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:57.630 [2024-11-18 10:42:23.358939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.630 [2024-11-18 10:42:23.361066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.630 [2024-11-18 10:42:23.361139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:57.630 BaseBdev2 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.630 BaseBdev3_malloc 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.630 [2024-11-18 10:42:23.442771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:57.630 [2024-11-18 10:42:23.442825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.630 [2024-11-18 10:42:23.442863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:57.630 [2024-11-18 10:42:23.442875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.630 [2024-11-18 
10:42:23.444882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.630 [2024-11-18 10:42:23.444977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:57.630 BaseBdev3 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.630 BaseBdev4_malloc 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.630 [2024-11-18 10:42:23.499155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:57.630 [2024-11-18 10:42:23.499217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.630 [2024-11-18 10:42:23.499252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:57.630 [2024-11-18 10:42:23.499261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.630 [2024-11-18 10:42:23.501211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.630 [2024-11-18 10:42:23.501282] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:57.630 BaseBdev4 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.630 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.891 spare_malloc 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.891 spare_delay 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.891 [2024-11-18 10:42:23.562336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:57.891 [2024-11-18 10:42:23.562389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.891 [2024-11-18 10:42:23.562407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:57.891 [2024-11-18 10:42:23.562417] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.891 [2024-11-18 10:42:23.564396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.891 [2024-11-18 10:42:23.564437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:57.891 spare 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.891 [2024-11-18 10:42:23.574366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.891 [2024-11-18 10:42:23.576054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.891 [2024-11-18 10:42:23.576121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.891 [2024-11-18 10:42:23.576169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.891 [2024-11-18 10:42:23.576357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:57.891 [2024-11-18 10:42:23.576374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:57.891 [2024-11-18 10:42:23.576604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:57.891 [2024-11-18 10:42:23.576770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:57.891 [2024-11-18 10:42:23.576780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:57.891 
[2024-11-18 10:42:23.576920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:57.891 "name": "raid_bdev1", 00:13:57.891 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:13:57.891 "strip_size_kb": 0, 00:13:57.891 "state": "online", 00:13:57.891 "raid_level": "raid1", 00:13:57.891 "superblock": true, 00:13:57.891 "num_base_bdevs": 4, 00:13:57.891 "num_base_bdevs_discovered": 4, 00:13:57.891 "num_base_bdevs_operational": 4, 00:13:57.891 "base_bdevs_list": [ 00:13:57.891 { 00:13:57.891 "name": "BaseBdev1", 00:13:57.891 "uuid": "9b85bf05-face-5fbe-8566-83bdc0dc3700", 00:13:57.891 "is_configured": true, 00:13:57.891 "data_offset": 2048, 00:13:57.891 "data_size": 63488 00:13:57.891 }, 00:13:57.891 { 00:13:57.891 "name": "BaseBdev2", 00:13:57.891 "uuid": "baf6daef-12a9-53f1-ac10-75349787659b", 00:13:57.891 "is_configured": true, 00:13:57.891 "data_offset": 2048, 00:13:57.891 "data_size": 63488 00:13:57.891 }, 00:13:57.891 { 00:13:57.891 "name": "BaseBdev3", 00:13:57.891 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:13:57.891 "is_configured": true, 00:13:57.891 "data_offset": 2048, 00:13:57.891 "data_size": 63488 00:13:57.891 }, 00:13:57.891 { 00:13:57.891 "name": "BaseBdev4", 00:13:57.891 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:13:57.891 "is_configured": true, 00:13:57.891 "data_offset": 2048, 00:13:57.891 "data_size": 63488 00:13:57.891 } 00:13:57.891 ] 00:13:57.891 }' 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.891 10:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.151 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.151 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.151 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.151 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 
00:13:58.151 [2024-11-18 10:42:24.029863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.413 [2024-11-18 10:42:24.121375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.413 10:42:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.413 "name": "raid_bdev1", 00:13:58.413 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:13:58.413 "strip_size_kb": 0, 00:13:58.413 "state": "online", 00:13:58.413 "raid_level": "raid1", 00:13:58.413 "superblock": true, 00:13:58.413 "num_base_bdevs": 4, 00:13:58.413 "num_base_bdevs_discovered": 3, 00:13:58.413 "num_base_bdevs_operational": 3, 
00:13:58.413 "base_bdevs_list": [ 00:13:58.413 { 00:13:58.413 "name": null, 00:13:58.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.413 "is_configured": false, 00:13:58.413 "data_offset": 0, 00:13:58.413 "data_size": 63488 00:13:58.413 }, 00:13:58.413 { 00:13:58.413 "name": "BaseBdev2", 00:13:58.413 "uuid": "baf6daef-12a9-53f1-ac10-75349787659b", 00:13:58.413 "is_configured": true, 00:13:58.413 "data_offset": 2048, 00:13:58.413 "data_size": 63488 00:13:58.413 }, 00:13:58.413 { 00:13:58.413 "name": "BaseBdev3", 00:13:58.413 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:13:58.413 "is_configured": true, 00:13:58.413 "data_offset": 2048, 00:13:58.413 "data_size": 63488 00:13:58.413 }, 00:13:58.413 { 00:13:58.413 "name": "BaseBdev4", 00:13:58.413 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:13:58.413 "is_configured": true, 00:13:58.413 "data_offset": 2048, 00:13:58.413 "data_size": 63488 00:13:58.413 } 00:13:58.413 ] 00:13:58.413 }' 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.413 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.413 [2024-11-18 10:42:24.217075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:58.413 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.413 Zero copy mechanism will not be used. 00:13:58.413 Running I/O for 60 seconds... 
00:13:58.984 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.984 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.984 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.984 [2024-11-18 10:42:24.591343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.984 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.984 10:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:58.984 [2024-11-18 10:42:24.644773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:58.984 [2024-11-18 10:42:24.646682] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.984 [2024-11-18 10:42:24.754946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:58.984 [2024-11-18 10:42:24.755442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:59.244 [2024-11-18 10:42:24.878201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:59.244 [2024-11-18 10:42:24.878918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:59.504 166.00 IOPS, 498.00 MiB/s [2024-11-18T10:42:25.389Z] [2024-11-18 10:42:25.224323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:59.504 [2024-11-18 10:42:25.225539] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:59.765 [2024-11-18 10:42:25.441008] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:59.765 [2024-11-18 10:42:25.441683] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.765 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.026 "name": "raid_bdev1", 00:14:00.026 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:00.026 "strip_size_kb": 0, 00:14:00.026 "state": "online", 00:14:00.026 "raid_level": "raid1", 00:14:00.026 "superblock": true, 00:14:00.026 "num_base_bdevs": 4, 00:14:00.026 "num_base_bdevs_discovered": 4, 00:14:00.026 "num_base_bdevs_operational": 4, 00:14:00.026 "process": { 00:14:00.026 "type": "rebuild", 00:14:00.026 "target": "spare", 00:14:00.026 "progress": { 
00:14:00.026 "blocks": 10240, 00:14:00.026 "percent": 16 00:14:00.026 } 00:14:00.026 }, 00:14:00.026 "base_bdevs_list": [ 00:14:00.026 { 00:14:00.026 "name": "spare", 00:14:00.026 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:00.026 "is_configured": true, 00:14:00.026 "data_offset": 2048, 00:14:00.026 "data_size": 63488 00:14:00.026 }, 00:14:00.026 { 00:14:00.026 "name": "BaseBdev2", 00:14:00.026 "uuid": "baf6daef-12a9-53f1-ac10-75349787659b", 00:14:00.026 "is_configured": true, 00:14:00.026 "data_offset": 2048, 00:14:00.026 "data_size": 63488 00:14:00.026 }, 00:14:00.026 { 00:14:00.026 "name": "BaseBdev3", 00:14:00.026 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:00.026 "is_configured": true, 00:14:00.026 "data_offset": 2048, 00:14:00.026 "data_size": 63488 00:14:00.026 }, 00:14:00.026 { 00:14:00.026 "name": "BaseBdev4", 00:14:00.026 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:00.026 "is_configured": true, 00:14:00.026 "data_offset": 2048, 00:14:00.026 "data_size": 63488 00:14:00.026 } 00:14:00.026 ] 00:14:00.026 }' 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:00.026 [2024-11-18 10:42:25.776532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.026 10:42:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.026 [2024-11-18 10:42:25.792526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.026 [2024-11-18 10:42:25.898820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:00.026 [2024-11-18 10:42:25.899608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:00.287 [2024-11-18 10:42:26.012900] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:00.287 [2024-11-18 10:42:26.030451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.287 [2024-11-18 10:42:26.030562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.287 [2024-11-18 10:42:26.030593] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.287 [2024-11-18 10:42:26.053353] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.287 10:42:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.287 "name": "raid_bdev1", 00:14:00.287 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:00.287 "strip_size_kb": 0, 00:14:00.287 "state": "online", 00:14:00.287 "raid_level": "raid1", 00:14:00.287 "superblock": true, 00:14:00.287 "num_base_bdevs": 4, 00:14:00.287 "num_base_bdevs_discovered": 3, 00:14:00.287 "num_base_bdevs_operational": 3, 00:14:00.287 "base_bdevs_list": [ 00:14:00.287 { 00:14:00.287 "name": null, 00:14:00.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.287 "is_configured": false, 00:14:00.287 "data_offset": 0, 00:14:00.287 "data_size": 63488 00:14:00.287 }, 00:14:00.287 { 00:14:00.287 "name": "BaseBdev2", 00:14:00.287 "uuid": "baf6daef-12a9-53f1-ac10-75349787659b", 00:14:00.287 "is_configured": true, 00:14:00.287 "data_offset": 2048, 00:14:00.287 "data_size": 63488 00:14:00.287 }, 00:14:00.287 { 00:14:00.287 "name": "BaseBdev3", 00:14:00.287 "uuid": 
"4784efb9-4875-5014-b710-f96720692e5b", 00:14:00.287 "is_configured": true, 00:14:00.287 "data_offset": 2048, 00:14:00.287 "data_size": 63488 00:14:00.287 }, 00:14:00.287 { 00:14:00.287 "name": "BaseBdev4", 00:14:00.287 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:00.287 "is_configured": true, 00:14:00.287 "data_offset": 2048, 00:14:00.287 "data_size": 63488 00:14:00.287 } 00:14:00.287 ] 00:14:00.287 }' 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.287 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.808 154.50 IOPS, 463.50 MiB/s [2024-11-18T10:42:26.693Z] 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.808 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.808 "name": "raid_bdev1", 00:14:00.808 "uuid": 
"df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:00.808 "strip_size_kb": 0, 00:14:00.808 "state": "online", 00:14:00.808 "raid_level": "raid1", 00:14:00.808 "superblock": true, 00:14:00.808 "num_base_bdevs": 4, 00:14:00.808 "num_base_bdevs_discovered": 3, 00:14:00.808 "num_base_bdevs_operational": 3, 00:14:00.808 "base_bdevs_list": [ 00:14:00.808 { 00:14:00.808 "name": null, 00:14:00.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.808 "is_configured": false, 00:14:00.808 "data_offset": 0, 00:14:00.808 "data_size": 63488 00:14:00.808 }, 00:14:00.808 { 00:14:00.808 "name": "BaseBdev2", 00:14:00.808 "uuid": "baf6daef-12a9-53f1-ac10-75349787659b", 00:14:00.808 "is_configured": true, 00:14:00.808 "data_offset": 2048, 00:14:00.808 "data_size": 63488 00:14:00.808 }, 00:14:00.808 { 00:14:00.809 "name": "BaseBdev3", 00:14:00.809 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:00.809 "is_configured": true, 00:14:00.809 "data_offset": 2048, 00:14:00.809 "data_size": 63488 00:14:00.809 }, 00:14:00.809 { 00:14:00.809 "name": "BaseBdev4", 00:14:00.809 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:00.809 "is_configured": true, 00:14:00.809 "data_offset": 2048, 00:14:00.809 "data_size": 63488 00:14:00.809 } 00:14:00.809 ] 00:14:00.809 }' 00:14:00.809 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.809 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.809 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.809 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.809 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.809 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.809 10:42:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.809 [2024-11-18 10:42:26.646796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.809 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.809 10:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:01.068 [2024-11-18 10:42:26.703879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:01.068 [2024-11-18 10:42:26.705781] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.068 [2024-11-18 10:42:26.838445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:01.068 [2024-11-18 10:42:26.839767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:01.329 [2024-11-18 10:42:27.066972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.329 [2024-11-18 10:42:27.067350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.848 164.33 IOPS, 493.00 MiB/s [2024-11-18T10:42:27.733Z] [2024-11-18 10:42:27.518477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.848 [2024-11-18 10:42:27.518775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.848 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.109 "name": "raid_bdev1", 00:14:02.109 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:02.109 "strip_size_kb": 0, 00:14:02.109 "state": "online", 00:14:02.109 "raid_level": "raid1", 00:14:02.109 "superblock": true, 00:14:02.109 "num_base_bdevs": 4, 00:14:02.109 "num_base_bdevs_discovered": 4, 00:14:02.109 "num_base_bdevs_operational": 4, 00:14:02.109 "process": { 00:14:02.109 "type": "rebuild", 00:14:02.109 "target": "spare", 00:14:02.109 "progress": { 00:14:02.109 "blocks": 12288, 00:14:02.109 "percent": 19 00:14:02.109 } 00:14:02.109 }, 00:14:02.109 "base_bdevs_list": [ 00:14:02.109 { 00:14:02.109 "name": "spare", 00:14:02.109 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:02.109 "is_configured": true, 00:14:02.109 "data_offset": 2048, 00:14:02.109 "data_size": 63488 00:14:02.109 }, 00:14:02.109 { 00:14:02.109 "name": "BaseBdev2", 00:14:02.109 "uuid": "baf6daef-12a9-53f1-ac10-75349787659b", 00:14:02.109 "is_configured": true, 00:14:02.109 "data_offset": 2048, 00:14:02.109 "data_size": 63488 00:14:02.109 }, 00:14:02.109 { 
00:14:02.109 "name": "BaseBdev3", 00:14:02.109 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:02.109 "is_configured": true, 00:14:02.109 "data_offset": 2048, 00:14:02.109 "data_size": 63488 00:14:02.109 }, 00:14:02.109 { 00:14:02.109 "name": "BaseBdev4", 00:14:02.109 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:02.109 "is_configured": true, 00:14:02.109 "data_offset": 2048, 00:14:02.109 "data_size": 63488 00:14:02.109 } 00:14:02.109 ] 00:14:02.109 }' 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:02.109 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.109 10:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.109 [2024-11-18 10:42:27.849096] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:02.369 [2024-11-18 10:42:27.996843] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:02.369 [2024-11-18 10:42:27.996948] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.369 "name": "raid_bdev1", 00:14:02.369 "uuid": 
"df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:02.369 "strip_size_kb": 0, 00:14:02.369 "state": "online", 00:14:02.369 "raid_level": "raid1", 00:14:02.369 "superblock": true, 00:14:02.369 "num_base_bdevs": 4, 00:14:02.369 "num_base_bdevs_discovered": 3, 00:14:02.369 "num_base_bdevs_operational": 3, 00:14:02.369 "process": { 00:14:02.369 "type": "rebuild", 00:14:02.369 "target": "spare", 00:14:02.369 "progress": { 00:14:02.369 "blocks": 16384, 00:14:02.369 "percent": 25 00:14:02.369 } 00:14:02.369 }, 00:14:02.369 "base_bdevs_list": [ 00:14:02.369 { 00:14:02.369 "name": "spare", 00:14:02.369 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:02.369 "is_configured": true, 00:14:02.369 "data_offset": 2048, 00:14:02.369 "data_size": 63488 00:14:02.369 }, 00:14:02.369 { 00:14:02.369 "name": null, 00:14:02.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.369 "is_configured": false, 00:14:02.369 "data_offset": 0, 00:14:02.369 "data_size": 63488 00:14:02.369 }, 00:14:02.369 { 00:14:02.369 "name": "BaseBdev3", 00:14:02.369 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:02.369 "is_configured": true, 00:14:02.369 "data_offset": 2048, 00:14:02.369 "data_size": 63488 00:14:02.369 }, 00:14:02.369 { 00:14:02.369 "name": "BaseBdev4", 00:14:02.369 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:02.369 "is_configured": true, 00:14:02.369 "data_offset": 2048, 00:14:02.369 "data_size": 63488 00:14:02.369 } 00:14:02.369 ] 00:14:02.369 }' 00:14:02.369 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@706 -- # local timeout=490 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.370 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.370 "name": "raid_bdev1", 00:14:02.370 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:02.370 "strip_size_kb": 0, 00:14:02.370 "state": "online", 00:14:02.370 "raid_level": "raid1", 00:14:02.370 "superblock": true, 00:14:02.370 "num_base_bdevs": 4, 00:14:02.370 "num_base_bdevs_discovered": 3, 00:14:02.370 "num_base_bdevs_operational": 3, 00:14:02.370 "process": { 00:14:02.370 "type": "rebuild", 00:14:02.370 "target": "spare", 00:14:02.370 "progress": { 00:14:02.370 "blocks": 18432, 00:14:02.370 "percent": 29 00:14:02.370 } 00:14:02.370 }, 00:14:02.370 "base_bdevs_list": [ 
00:14:02.370 { 00:14:02.370 "name": "spare", 00:14:02.370 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:02.370 "is_configured": true, 00:14:02.370 "data_offset": 2048, 00:14:02.370 "data_size": 63488 00:14:02.370 }, 00:14:02.370 { 00:14:02.370 "name": null, 00:14:02.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.370 "is_configured": false, 00:14:02.370 "data_offset": 0, 00:14:02.370 "data_size": 63488 00:14:02.370 }, 00:14:02.370 { 00:14:02.370 "name": "BaseBdev3", 00:14:02.370 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:02.370 "is_configured": true, 00:14:02.370 "data_offset": 2048, 00:14:02.370 "data_size": 63488 00:14:02.370 }, 00:14:02.370 { 00:14:02.370 "name": "BaseBdev4", 00:14:02.370 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:02.370 "is_configured": true, 00:14:02.370 "data_offset": 2048, 00:14:02.370 "data_size": 63488 00:14:02.370 } 00:14:02.370 ] 00:14:02.370 }' 00:14:02.370 140.00 IOPS, 420.00 MiB/s [2024-11-18T10:42:28.255Z] 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.630 [2024-11-18 10:42:28.256623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:02.630 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.630 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.630 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.630 10:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.200 [2024-11-18 10:42:28.853789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:03.200 [2024-11-18 10:42:28.854334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 
offset_begin: 24576 offset_end: 30720 00:14:03.460 [2024-11-18 10:42:29.169661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:03.460 126.60 IOPS, 379.80 MiB/s [2024-11-18T10:42:29.345Z] 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.460 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.720 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.720 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.720 "name": "raid_bdev1", 00:14:03.720 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:03.720 "strip_size_kb": 0, 00:14:03.720 "state": "online", 00:14:03.720 "raid_level": "raid1", 00:14:03.720 "superblock": true, 00:14:03.720 "num_base_bdevs": 4, 00:14:03.720 "num_base_bdevs_discovered": 3, 00:14:03.720 "num_base_bdevs_operational": 3, 00:14:03.720 "process": { 
00:14:03.720 "type": "rebuild", 00:14:03.720 "target": "spare", 00:14:03.720 "progress": { 00:14:03.720 "blocks": 32768, 00:14:03.720 "percent": 51 00:14:03.720 } 00:14:03.720 }, 00:14:03.720 "base_bdevs_list": [ 00:14:03.720 { 00:14:03.720 "name": "spare", 00:14:03.720 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:03.720 "is_configured": true, 00:14:03.720 "data_offset": 2048, 00:14:03.720 "data_size": 63488 00:14:03.720 }, 00:14:03.720 { 00:14:03.720 "name": null, 00:14:03.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.720 "is_configured": false, 00:14:03.720 "data_offset": 0, 00:14:03.720 "data_size": 63488 00:14:03.720 }, 00:14:03.720 { 00:14:03.720 "name": "BaseBdev3", 00:14:03.720 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:03.720 "is_configured": true, 00:14:03.720 "data_offset": 2048, 00:14:03.720 "data_size": 63488 00:14:03.720 }, 00:14:03.720 { 00:14:03.720 "name": "BaseBdev4", 00:14:03.720 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:03.720 "is_configured": true, 00:14:03.720 "data_offset": 2048, 00:14:03.720 "data_size": 63488 00:14:03.720 } 00:14:03.720 ] 00:14:03.720 }' 00:14:03.720 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.720 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.720 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.720 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.720 10:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.980 [2024-11-18 10:42:29.713654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:04.241 [2024-11-18 10:42:30.033887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 
offset_begin: 43008 offset_end: 49152 00:14:04.500 [2024-11-18 10:42:30.140851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:04.760 113.17 IOPS, 339.50 MiB/s [2024-11-18T10:42:30.646Z] [2024-11-18 10:42:30.463785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.761 "name": "raid_bdev1", 00:14:04.761 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:04.761 "strip_size_kb": 0, 00:14:04.761 "state": "online", 00:14:04.761 "raid_level": "raid1", 00:14:04.761 "superblock": 
true, 00:14:04.761 "num_base_bdevs": 4, 00:14:04.761 "num_base_bdevs_discovered": 3, 00:14:04.761 "num_base_bdevs_operational": 3, 00:14:04.761 "process": { 00:14:04.761 "type": "rebuild", 00:14:04.761 "target": "spare", 00:14:04.761 "progress": { 00:14:04.761 "blocks": 51200, 00:14:04.761 "percent": 80 00:14:04.761 } 00:14:04.761 }, 00:14:04.761 "base_bdevs_list": [ 00:14:04.761 { 00:14:04.761 "name": "spare", 00:14:04.761 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:04.761 "is_configured": true, 00:14:04.761 "data_offset": 2048, 00:14:04.761 "data_size": 63488 00:14:04.761 }, 00:14:04.761 { 00:14:04.761 "name": null, 00:14:04.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.761 "is_configured": false, 00:14:04.761 "data_offset": 0, 00:14:04.761 "data_size": 63488 00:14:04.761 }, 00:14:04.761 { 00:14:04.761 "name": "BaseBdev3", 00:14:04.761 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:04.761 "is_configured": true, 00:14:04.761 "data_offset": 2048, 00:14:04.761 "data_size": 63488 00:14:04.761 }, 00:14:04.761 { 00:14:04.761 "name": "BaseBdev4", 00:14:04.761 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:04.761 "is_configured": true, 00:14:04.761 "data_offset": 2048, 00:14:04.761 "data_size": 63488 00:14:04.761 } 00:14:04.761 ] 00:14:04.761 }' 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.761 10:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.331 [2024-11-18 10:42:30.914163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 
offset_begin: 55296 offset_end: 61440 00:14:05.591 102.57 IOPS, 307.71 MiB/s [2024-11-18T10:42:31.476Z] [2024-11-18 10:42:31.351757] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:05.591 [2024-11-18 10:42:31.456507] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:05.591 [2024-11-18 10:42:31.458382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.851 "name": "raid_bdev1", 00:14:05.851 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:05.851 "strip_size_kb": 0, 00:14:05.851 "state": "online", 
00:14:05.851 "raid_level": "raid1", 00:14:05.851 "superblock": true, 00:14:05.851 "num_base_bdevs": 4, 00:14:05.851 "num_base_bdevs_discovered": 3, 00:14:05.851 "num_base_bdevs_operational": 3, 00:14:05.851 "base_bdevs_list": [ 00:14:05.851 { 00:14:05.851 "name": "spare", 00:14:05.851 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:05.851 "is_configured": true, 00:14:05.851 "data_offset": 2048, 00:14:05.851 "data_size": 63488 00:14:05.851 }, 00:14:05.851 { 00:14:05.851 "name": null, 00:14:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.851 "is_configured": false, 00:14:05.851 "data_offset": 0, 00:14:05.851 "data_size": 63488 00:14:05.851 }, 00:14:05.851 { 00:14:05.851 "name": "BaseBdev3", 00:14:05.851 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:05.851 "is_configured": true, 00:14:05.851 "data_offset": 2048, 00:14:05.851 "data_size": 63488 00:14:05.851 }, 00:14:05.851 { 00:14:05.851 "name": "BaseBdev4", 00:14:05.851 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:05.851 "is_configured": true, 00:14:05.851 "data_offset": 2048, 00:14:05.851 "data_size": 63488 00:14:05.851 } 00:14:05.851 ] 00:14:05.851 }' 00:14:05.851 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.112 10:42:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.112 "name": "raid_bdev1", 00:14:06.112 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:06.112 "strip_size_kb": 0, 00:14:06.112 "state": "online", 00:14:06.112 "raid_level": "raid1", 00:14:06.112 "superblock": true, 00:14:06.112 "num_base_bdevs": 4, 00:14:06.112 "num_base_bdevs_discovered": 3, 00:14:06.112 "num_base_bdevs_operational": 3, 00:14:06.112 "base_bdevs_list": [ 00:14:06.112 { 00:14:06.112 "name": "spare", 00:14:06.112 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:06.112 "is_configured": true, 00:14:06.112 "data_offset": 2048, 00:14:06.112 "data_size": 63488 00:14:06.112 }, 00:14:06.112 { 00:14:06.112 "name": null, 00:14:06.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.112 "is_configured": false, 00:14:06.112 "data_offset": 0, 00:14:06.112 "data_size": 63488 00:14:06.112 }, 00:14:06.112 { 00:14:06.112 "name": "BaseBdev3", 00:14:06.112 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:06.112 "is_configured": true, 00:14:06.112 "data_offset": 2048, 00:14:06.112 
"data_size": 63488 00:14:06.112 }, 00:14:06.112 { 00:14:06.112 "name": "BaseBdev4", 00:14:06.112 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:06.112 "is_configured": true, 00:14:06.112 "data_offset": 2048, 00:14:06.112 "data_size": 63488 00:14:06.112 } 00:14:06.112 ] 00:14:06.112 }' 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.112 "name": "raid_bdev1", 00:14:06.112 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:06.112 "strip_size_kb": 0, 00:14:06.112 "state": "online", 00:14:06.112 "raid_level": "raid1", 00:14:06.112 "superblock": true, 00:14:06.112 "num_base_bdevs": 4, 00:14:06.112 "num_base_bdevs_discovered": 3, 00:14:06.112 "num_base_bdevs_operational": 3, 00:14:06.112 "base_bdevs_list": [ 00:14:06.112 { 00:14:06.112 "name": "spare", 00:14:06.112 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:06.112 "is_configured": true, 00:14:06.112 "data_offset": 2048, 00:14:06.112 "data_size": 63488 00:14:06.112 }, 00:14:06.112 { 00:14:06.112 "name": null, 00:14:06.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.112 "is_configured": false, 00:14:06.112 "data_offset": 0, 00:14:06.112 "data_size": 63488 00:14:06.112 }, 00:14:06.112 { 00:14:06.112 "name": "BaseBdev3", 00:14:06.112 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:06.112 "is_configured": true, 00:14:06.112 "data_offset": 2048, 00:14:06.112 "data_size": 63488 00:14:06.112 }, 00:14:06.112 { 00:14:06.112 "name": "BaseBdev4", 00:14:06.112 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:06.112 "is_configured": true, 00:14:06.112 "data_offset": 2048, 00:14:06.112 "data_size": 63488 00:14:06.112 } 00:14:06.112 ] 00:14:06.112 }' 00:14:06.112 10:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.112 10:42:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.633 94.38 IOPS, 283.12 MiB/s [2024-11-18T10:42:32.518Z] 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.633 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.633 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.633 [2024-11-18 10:42:32.421028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.633 [2024-11-18 10:42:32.421114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.893 00:14:06.893 Latency(us) 00:14:06.893 [2024-11-18T10:42:32.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.893 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:06.893 raid_bdev1 : 8.32 92.45 277.36 0.00 0.00 14705.37 289.76 115847.04 00:14:06.893 [2024-11-18T10:42:32.778Z] =================================================================================================================== 00:14:06.893 [2024-11-18T10:42:32.778Z] Total : 92.45 277.36 0.00 0.00 14705.37 289.76 115847.04 00:14:06.893 [2024-11-18 10:42:32.541028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.893 [2024-11-18 10:42:32.541107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.893 [2024-11-18 10:42:32.541235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.893 [2024-11-18 10:42:32.541321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:06.893 { 00:14:06.893 "results": [ 00:14:06.893 { 00:14:06.893 "job": "raid_bdev1", 00:14:06.893 "core_mask": "0x1", 00:14:06.893 "workload": "randrw", 
00:14:06.893 "percentage": 50, 00:14:06.893 "status": "finished", 00:14:06.893 "queue_depth": 2, 00:14:06.893 "io_size": 3145728, 00:14:06.893 "runtime": 8.317746, 00:14:06.893 "iops": 92.45293135904848, 00:14:06.893 "mibps": 277.35879407714543, 00:14:06.893 "io_failed": 0, 00:14:06.893 "io_timeout": 0, 00:14:06.893 "avg_latency_us": 14705.36941414302, 00:14:06.893 "min_latency_us": 289.7606986899563, 00:14:06.893 "max_latency_us": 115847.04279475982 00:14:06.893 } 00:14:06.893 ], 00:14:06.893 "core_count": 1 00:14:06.893 } 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.893 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:07.154 /dev/nbd0 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.154 1+0 records in 00:14:07.154 1+0 records out 00:14:07.154 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000557324 s, 7.3 MB/s 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:07.154 
10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.154 10:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:07.415 /dev/nbd1 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.415 1+0 records in 00:14:07.415 1+0 records out 00:14:07.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475318 s, 8.6 MB/s 00:14:07.415 
10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.415 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.675 10:42:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.675 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:07.934 /dev/nbd1 
00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.934 1+0 records in 00:14:07.934 1+0 records out 00:14:07.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043037 s, 9.5 MB/s 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.934 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:08.194 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:08.195 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.195 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:08.195 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.195 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:08.195 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.195 10:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.195 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.455 [2024-11-18 10:42:34.304834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:08.455 [2024-11-18 10:42:34.304938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.455 [2024-11-18 10:42:34.305003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:08.455 [2024-11-18 10:42:34.305036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.455 [2024-11-18 10:42:34.307161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.455 [2024-11-18 10:42:34.307247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:08.455 [2024-11-18 10:42:34.307360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:08.455 [2024-11-18 10:42:34.307434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.455 [2024-11-18 10:42:34.307605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.455 [2024-11-18 10:42:34.307744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:08.455 spare 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.455 10:42:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.455 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.716 [2024-11-18 10:42:34.407668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:08.716 [2024-11-18 10:42:34.407732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:08.716 [2024-11-18 10:42:34.407993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:08.716 [2024-11-18 10:42:34.408194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:08.716 [2024-11-18 10:42:34.408237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:08.716 [2024-11-18 10:42:34.408418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.716 "name": "raid_bdev1", 00:14:08.716 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:08.716 "strip_size_kb": 0, 00:14:08.716 "state": "online", 00:14:08.716 "raid_level": "raid1", 00:14:08.716 "superblock": true, 00:14:08.716 "num_base_bdevs": 4, 00:14:08.716 "num_base_bdevs_discovered": 3, 00:14:08.716 "num_base_bdevs_operational": 3, 00:14:08.716 "base_bdevs_list": [ 00:14:08.716 { 00:14:08.716 "name": "spare", 00:14:08.716 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:08.716 "is_configured": true, 00:14:08.716 "data_offset": 2048, 00:14:08.716 "data_size": 63488 00:14:08.716 }, 00:14:08.716 { 00:14:08.716 "name": null, 00:14:08.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.716 "is_configured": false, 00:14:08.716 "data_offset": 2048, 00:14:08.716 "data_size": 63488 00:14:08.716 }, 00:14:08.716 { 00:14:08.716 "name": "BaseBdev3", 00:14:08.716 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:08.716 "is_configured": true, 00:14:08.716 "data_offset": 
2048, 00:14:08.716 "data_size": 63488 00:14:08.716 }, 00:14:08.716 { 00:14:08.716 "name": "BaseBdev4", 00:14:08.716 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:08.716 "is_configured": true, 00:14:08.716 "data_offset": 2048, 00:14:08.716 "data_size": 63488 00:14:08.716 } 00:14:08.716 ] 00:14:08.716 }' 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.716 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.303 "name": "raid_bdev1", 00:14:09.303 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:09.303 "strip_size_kb": 0, 00:14:09.303 "state": "online", 00:14:09.303 "raid_level": "raid1", 00:14:09.303 "superblock": true, 00:14:09.303 
"num_base_bdevs": 4, 00:14:09.303 "num_base_bdevs_discovered": 3, 00:14:09.303 "num_base_bdevs_operational": 3, 00:14:09.303 "base_bdevs_list": [ 00:14:09.303 { 00:14:09.303 "name": "spare", 00:14:09.303 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:09.303 "is_configured": true, 00:14:09.303 "data_offset": 2048, 00:14:09.303 "data_size": 63488 00:14:09.303 }, 00:14:09.303 { 00:14:09.303 "name": null, 00:14:09.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.303 "is_configured": false, 00:14:09.303 "data_offset": 2048, 00:14:09.303 "data_size": 63488 00:14:09.303 }, 00:14:09.303 { 00:14:09.303 "name": "BaseBdev3", 00:14:09.303 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:09.303 "is_configured": true, 00:14:09.303 "data_offset": 2048, 00:14:09.303 "data_size": 63488 00:14:09.303 }, 00:14:09.303 { 00:14:09.303 "name": "BaseBdev4", 00:14:09.303 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:09.303 "is_configured": true, 00:14:09.303 "data_offset": 2048, 00:14:09.303 "data_size": 63488 00:14:09.303 } 00:14:09.303 ] 00:14:09.303 }' 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.303 10:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.303 
10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.303 [2024-11-18 10:42:35.059702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.303 "name": "raid_bdev1", 00:14:09.303 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:09.303 "strip_size_kb": 0, 00:14:09.303 "state": "online", 00:14:09.303 "raid_level": "raid1", 00:14:09.303 "superblock": true, 00:14:09.303 "num_base_bdevs": 4, 00:14:09.303 "num_base_bdevs_discovered": 2, 00:14:09.303 "num_base_bdevs_operational": 2, 00:14:09.303 "base_bdevs_list": [ 00:14:09.303 { 00:14:09.303 "name": null, 00:14:09.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.303 "is_configured": false, 00:14:09.303 "data_offset": 0, 00:14:09.303 "data_size": 63488 00:14:09.303 }, 00:14:09.303 { 00:14:09.303 "name": null, 00:14:09.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.303 "is_configured": false, 00:14:09.303 "data_offset": 2048, 00:14:09.303 "data_size": 63488 00:14:09.303 }, 00:14:09.303 { 00:14:09.303 "name": "BaseBdev3", 00:14:09.303 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:09.303 "is_configured": true, 00:14:09.303 "data_offset": 2048, 00:14:09.303 "data_size": 63488 00:14:09.303 }, 00:14:09.303 { 00:14:09.303 "name": "BaseBdev4", 00:14:09.303 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:09.303 "is_configured": true, 00:14:09.303 "data_offset": 2048, 00:14:09.303 "data_size": 63488 00:14:09.303 } 00:14:09.303 ] 00:14:09.303 }' 00:14:09.303 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.303 10:42:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.872 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.872 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.872 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.872 [2024-11-18 10:42:35.543262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.872 [2024-11-18 10:42:35.543449] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:09.872 [2024-11-18 10:42:35.543522] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:09.872 [2024-11-18 10:42:35.543578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.872 [2024-11-18 10:42:35.558043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:09.872 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.872 10:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:09.872 [2024-11-18 10:42:35.559867] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.813 "name": "raid_bdev1", 00:14:10.813 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:10.813 "strip_size_kb": 0, 00:14:10.813 "state": "online", 00:14:10.813 "raid_level": "raid1", 00:14:10.813 "superblock": true, 00:14:10.813 "num_base_bdevs": 4, 00:14:10.813 "num_base_bdevs_discovered": 3, 00:14:10.813 "num_base_bdevs_operational": 3, 00:14:10.813 "process": { 00:14:10.813 "type": "rebuild", 00:14:10.813 "target": "spare", 00:14:10.813 "progress": { 00:14:10.813 "blocks": 20480, 00:14:10.813 "percent": 32 00:14:10.813 } 00:14:10.813 }, 00:14:10.813 "base_bdevs_list": [ 00:14:10.813 { 00:14:10.813 "name": "spare", 00:14:10.813 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:10.813 "is_configured": true, 00:14:10.813 "data_offset": 2048, 00:14:10.813 "data_size": 63488 00:14:10.813 }, 00:14:10.813 { 00:14:10.813 "name": null, 00:14:10.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.813 "is_configured": false, 00:14:10.813 "data_offset": 2048, 00:14:10.813 "data_size": 63488 00:14:10.813 }, 00:14:10.813 { 00:14:10.813 "name": "BaseBdev3", 00:14:10.813 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:10.813 "is_configured": true, 00:14:10.813 "data_offset": 2048, 00:14:10.813 "data_size": 63488 00:14:10.813 }, 00:14:10.813 { 
00:14:10.813 "name": "BaseBdev4", 00:14:10.813 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:10.813 "is_configured": true, 00:14:10.813 "data_offset": 2048, 00:14:10.813 "data_size": 63488 00:14:10.813 } 00:14:10.813 ] 00:14:10.813 }' 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.813 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.074 [2024-11-18 10:42:36.727644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.074 [2024-11-18 10:42:36.764544] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:11.074 [2024-11-18 10:42:36.764605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.074 [2024-11-18 10:42:36.764621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.074 [2024-11-18 10:42:36.764629] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.074 "name": "raid_bdev1", 00:14:11.074 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:11.074 "strip_size_kb": 0, 00:14:11.074 "state": "online", 00:14:11.074 "raid_level": "raid1", 00:14:11.074 "superblock": true, 00:14:11.074 "num_base_bdevs": 4, 00:14:11.074 "num_base_bdevs_discovered": 2, 00:14:11.074 "num_base_bdevs_operational": 2, 00:14:11.074 "base_bdevs_list": [ 00:14:11.074 { 00:14:11.074 
"name": null, 00:14:11.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.074 "is_configured": false, 00:14:11.074 "data_offset": 0, 00:14:11.074 "data_size": 63488 00:14:11.074 }, 00:14:11.074 { 00:14:11.074 "name": null, 00:14:11.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.074 "is_configured": false, 00:14:11.074 "data_offset": 2048, 00:14:11.074 "data_size": 63488 00:14:11.074 }, 00:14:11.074 { 00:14:11.074 "name": "BaseBdev3", 00:14:11.074 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:11.074 "is_configured": true, 00:14:11.074 "data_offset": 2048, 00:14:11.074 "data_size": 63488 00:14:11.074 }, 00:14:11.074 { 00:14:11.074 "name": "BaseBdev4", 00:14:11.074 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:11.074 "is_configured": true, 00:14:11.074 "data_offset": 2048, 00:14:11.074 "data_size": 63488 00:14:11.074 } 00:14:11.074 ] 00:14:11.074 }' 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.074 10:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.645 10:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.645 10:42:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.645 10:42:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.645 [2024-11-18 10:42:37.303142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.645 [2024-11-18 10:42:37.303258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.645 [2024-11-18 10:42:37.303301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:11.645 [2024-11-18 10:42:37.303336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.645 [2024-11-18 10:42:37.303795] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.645 [2024-11-18 10:42:37.303864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.645 [2024-11-18 10:42:37.303976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:11.645 [2024-11-18 10:42:37.304017] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:11.645 [2024-11-18 10:42:37.304056] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:11.645 [2024-11-18 10:42:37.304127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.645 [2024-11-18 10:42:37.317907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:11.645 spare 00:14:11.645 10:42:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.645 10:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:11.645 [2024-11-18 10:42:37.319751] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.585 "name": "raid_bdev1", 00:14:12.585 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:12.585 "strip_size_kb": 0, 00:14:12.585 "state": "online", 00:14:12.585 "raid_level": "raid1", 00:14:12.585 "superblock": true, 00:14:12.585 "num_base_bdevs": 4, 00:14:12.585 "num_base_bdevs_discovered": 3, 00:14:12.585 "num_base_bdevs_operational": 3, 00:14:12.585 "process": { 00:14:12.585 "type": "rebuild", 00:14:12.585 "target": "spare", 00:14:12.585 "progress": { 00:14:12.585 "blocks": 20480, 00:14:12.585 "percent": 32 00:14:12.585 } 00:14:12.585 }, 00:14:12.585 "base_bdevs_list": [ 00:14:12.585 { 00:14:12.585 "name": "spare", 00:14:12.585 "uuid": "1a0e7d8e-716a-5016-9c98-0f06c416149c", 00:14:12.585 "is_configured": true, 00:14:12.585 "data_offset": 2048, 00:14:12.585 "data_size": 63488 00:14:12.585 }, 00:14:12.585 { 00:14:12.585 "name": null, 00:14:12.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.585 "is_configured": false, 00:14:12.585 "data_offset": 2048, 00:14:12.585 "data_size": 63488 00:14:12.585 }, 00:14:12.585 { 00:14:12.585 "name": "BaseBdev3", 00:14:12.585 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:12.585 "is_configured": true, 00:14:12.585 "data_offset": 2048, 00:14:12.585 "data_size": 63488 00:14:12.585 }, 00:14:12.585 { 00:14:12.585 "name": "BaseBdev4", 00:14:12.585 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:12.585 "is_configured": true, 00:14:12.585 "data_offset": 2048, 00:14:12.585 "data_size": 63488 00:14:12.585 } 00:14:12.585 
] 00:14:12.585 }' 00:14:12.585 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.586 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.586 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.846 [2024-11-18 10:42:38.483959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.846 [2024-11-18 10:42:38.524330] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:12.846 [2024-11-18 10:42:38.524418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.846 [2024-11-18 10:42:38.524438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.846 [2024-11-18 10:42:38.524445] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.846 "name": "raid_bdev1", 00:14:12.846 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:12.846 "strip_size_kb": 0, 00:14:12.846 "state": "online", 00:14:12.846 "raid_level": "raid1", 00:14:12.846 "superblock": true, 00:14:12.846 "num_base_bdevs": 4, 00:14:12.846 "num_base_bdevs_discovered": 2, 00:14:12.846 "num_base_bdevs_operational": 2, 00:14:12.846 "base_bdevs_list": [ 00:14:12.846 { 00:14:12.846 "name": null, 00:14:12.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.846 "is_configured": false, 00:14:12.846 "data_offset": 0, 00:14:12.846 "data_size": 63488 00:14:12.846 }, 00:14:12.846 { 
00:14:12.846 "name": null, 00:14:12.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.846 "is_configured": false, 00:14:12.846 "data_offset": 2048, 00:14:12.846 "data_size": 63488 00:14:12.846 }, 00:14:12.846 { 00:14:12.846 "name": "BaseBdev3", 00:14:12.846 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:12.846 "is_configured": true, 00:14:12.846 "data_offset": 2048, 00:14:12.846 "data_size": 63488 00:14:12.846 }, 00:14:12.846 { 00:14:12.846 "name": "BaseBdev4", 00:14:12.846 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:12.846 "is_configured": true, 00:14:12.846 "data_offset": 2048, 00:14:12.846 "data_size": 63488 00:14:12.846 } 00:14:12.846 ] 00:14:12.846 }' 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.846 10:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.427 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.427 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.427 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.427 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.428 "name": "raid_bdev1", 00:14:13.428 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:13.428 "strip_size_kb": 0, 00:14:13.428 "state": "online", 00:14:13.428 "raid_level": "raid1", 00:14:13.428 "superblock": true, 00:14:13.428 "num_base_bdevs": 4, 00:14:13.428 "num_base_bdevs_discovered": 2, 00:14:13.428 "num_base_bdevs_operational": 2, 00:14:13.428 "base_bdevs_list": [ 00:14:13.428 { 00:14:13.428 "name": null, 00:14:13.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.428 "is_configured": false, 00:14:13.428 "data_offset": 0, 00:14:13.428 "data_size": 63488 00:14:13.428 }, 00:14:13.428 { 00:14:13.428 "name": null, 00:14:13.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.428 "is_configured": false, 00:14:13.428 "data_offset": 2048, 00:14:13.428 "data_size": 63488 00:14:13.428 }, 00:14:13.428 { 00:14:13.428 "name": "BaseBdev3", 00:14:13.428 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:13.428 "is_configured": true, 00:14:13.428 "data_offset": 2048, 00:14:13.428 "data_size": 63488 00:14:13.428 }, 00:14:13.428 { 00:14:13.428 "name": "BaseBdev4", 00:14:13.428 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:13.428 "is_configured": true, 00:14:13.428 "data_offset": 2048, 00:14:13.428 "data_size": 63488 00:14:13.428 } 00:14:13.428 ] 00:14:13.428 }' 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.428 [2024-11-18 10:42:39.175093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:13.428 [2024-11-18 10:42:39.175143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.428 [2024-11-18 10:42:39.175179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:13.428 [2024-11-18 10:42:39.175200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.428 [2024-11-18 10:42:39.175605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.428 [2024-11-18 10:42:39.175628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:13.428 [2024-11-18 10:42:39.175704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:13.428 [2024-11-18 10:42:39.175716] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:13.428 [2024-11-18 10:42:39.175728] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:13.428 [2024-11-18 10:42:39.175740] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:13.428 BaseBdev1 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.428 10:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.369 "name": "raid_bdev1", 00:14:14.369 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:14.369 "strip_size_kb": 0, 00:14:14.369 "state": "online", 00:14:14.369 "raid_level": "raid1", 00:14:14.369 "superblock": true, 00:14:14.369 "num_base_bdevs": 4, 00:14:14.369 "num_base_bdevs_discovered": 2, 00:14:14.369 "num_base_bdevs_operational": 2, 00:14:14.369 "base_bdevs_list": [ 00:14:14.369 { 00:14:14.369 "name": null, 00:14:14.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.369 "is_configured": false, 00:14:14.369 "data_offset": 0, 00:14:14.369 "data_size": 63488 00:14:14.369 }, 00:14:14.369 { 00:14:14.369 "name": null, 00:14:14.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.369 "is_configured": false, 00:14:14.369 "data_offset": 2048, 00:14:14.369 "data_size": 63488 00:14:14.369 }, 00:14:14.369 { 00:14:14.369 "name": "BaseBdev3", 00:14:14.369 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:14.369 "is_configured": true, 00:14:14.369 "data_offset": 2048, 00:14:14.369 "data_size": 63488 00:14:14.369 }, 00:14:14.369 { 00:14:14.369 "name": "BaseBdev4", 00:14:14.369 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:14.369 "is_configured": true, 00:14:14.369 "data_offset": 2048, 00:14:14.369 "data_size": 63488 00:14:14.369 } 00:14:14.369 ] 00:14:14.369 }' 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.369 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.938 "name": "raid_bdev1", 00:14:14.938 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:14.938 "strip_size_kb": 0, 00:14:14.938 "state": "online", 00:14:14.938 "raid_level": "raid1", 00:14:14.938 "superblock": true, 00:14:14.938 "num_base_bdevs": 4, 00:14:14.938 "num_base_bdevs_discovered": 2, 00:14:14.938 "num_base_bdevs_operational": 2, 00:14:14.938 "base_bdevs_list": [ 00:14:14.938 { 00:14:14.938 "name": null, 00:14:14.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.938 "is_configured": false, 00:14:14.938 "data_offset": 0, 00:14:14.938 "data_size": 63488 00:14:14.938 }, 00:14:14.938 { 00:14:14.938 "name": null, 00:14:14.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.938 "is_configured": false, 00:14:14.938 "data_offset": 2048, 00:14:14.938 "data_size": 63488 00:14:14.938 }, 00:14:14.938 { 00:14:14.938 "name": "BaseBdev3", 00:14:14.938 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:14.938 "is_configured": true, 00:14:14.938 "data_offset": 2048, 00:14:14.938 "data_size": 63488 00:14:14.938 }, 00:14:14.938 { 00:14:14.938 
"name": "BaseBdev4", 00:14:14.938 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:14.938 "is_configured": true, 00:14:14.938 "data_offset": 2048, 00:14:14.938 "data_size": 63488 00:14:14.938 } 00:14:14.938 ] 00:14:14.938 }' 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.938 [2024-11-18 10:42:40.788718] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.938 [2024-11-18 10:42:40.788889] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:14.938 [2024-11-18 10:42:40.788919] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:14.938 request: 00:14:14.938 { 00:14:14.938 "base_bdev": "BaseBdev1", 00:14:14.938 "raid_bdev": "raid_bdev1", 00:14:14.938 "method": "bdev_raid_add_base_bdev", 00:14:14.938 "req_id": 1 00:14:14.938 } 00:14:14.938 Got JSON-RPC error response 00:14:14.938 response: 00:14:14.938 { 00:14:14.938 "code": -22, 00:14:14.938 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:14.938 } 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:14.938 10:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.322 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.323 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.323 "name": "raid_bdev1", 00:14:16.323 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:16.323 "strip_size_kb": 0, 00:14:16.323 "state": "online", 00:14:16.323 "raid_level": "raid1", 00:14:16.323 "superblock": true, 00:14:16.323 "num_base_bdevs": 4, 00:14:16.323 "num_base_bdevs_discovered": 2, 00:14:16.323 "num_base_bdevs_operational": 2, 00:14:16.323 "base_bdevs_list": [ 00:14:16.323 { 00:14:16.323 "name": null, 00:14:16.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.323 "is_configured": false, 00:14:16.323 "data_offset": 0, 00:14:16.323 "data_size": 63488 00:14:16.323 }, 00:14:16.323 { 00:14:16.323 "name": null, 00:14:16.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.323 "is_configured": false, 
00:14:16.323 "data_offset": 2048, 00:14:16.323 "data_size": 63488 00:14:16.323 }, 00:14:16.323 { 00:14:16.323 "name": "BaseBdev3", 00:14:16.323 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:16.323 "is_configured": true, 00:14:16.323 "data_offset": 2048, 00:14:16.323 "data_size": 63488 00:14:16.323 }, 00:14:16.323 { 00:14:16.323 "name": "BaseBdev4", 00:14:16.323 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:16.323 "is_configured": true, 00:14:16.323 "data_offset": 2048, 00:14:16.323 "data_size": 63488 00:14:16.323 } 00:14:16.323 ] 00:14:16.323 }' 00:14:16.323 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.323 10:42:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:16.584 "name": "raid_bdev1", 00:14:16.584 "uuid": "df0e49c8-7476-456a-bb5b-7926dc7a12ce", 00:14:16.584 "strip_size_kb": 0, 00:14:16.584 "state": "online", 00:14:16.584 "raid_level": "raid1", 00:14:16.584 "superblock": true, 00:14:16.584 "num_base_bdevs": 4, 00:14:16.584 "num_base_bdevs_discovered": 2, 00:14:16.584 "num_base_bdevs_operational": 2, 00:14:16.584 "base_bdevs_list": [ 00:14:16.584 { 00:14:16.584 "name": null, 00:14:16.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.584 "is_configured": false, 00:14:16.584 "data_offset": 0, 00:14:16.584 "data_size": 63488 00:14:16.584 }, 00:14:16.584 { 00:14:16.584 "name": null, 00:14:16.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.584 "is_configured": false, 00:14:16.584 "data_offset": 2048, 00:14:16.584 "data_size": 63488 00:14:16.584 }, 00:14:16.584 { 00:14:16.584 "name": "BaseBdev3", 00:14:16.584 "uuid": "4784efb9-4875-5014-b710-f96720692e5b", 00:14:16.584 "is_configured": true, 00:14:16.584 "data_offset": 2048, 00:14:16.584 "data_size": 63488 00:14:16.584 }, 00:14:16.584 { 00:14:16.584 "name": "BaseBdev4", 00:14:16.584 "uuid": "550ce551-44c0-55b2-ae89-56e4c3f5b670", 00:14:16.584 "is_configured": true, 00:14:16.584 "data_offset": 2048, 00:14:16.584 "data_size": 63488 00:14:16.584 } 00:14:16.584 ] 00:14:16.584 }' 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78959 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
78959 ']' 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78959 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78959 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78959' 00:14:16.584 killing process with pid 78959 00:14:16.584 10:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78959 00:14:16.584 Received shutdown signal, test time was about 18.201500 seconds 00:14:16.584 00:14:16.584 Latency(us) 00:14:16.584 [2024-11-18T10:42:42.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.584 [2024-11-18T10:42:42.469Z] =================================================================================================================== 00:14:16.584 [2024-11-18T10:42:42.469Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.584 [2024-11-18 10:42:42.385518] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.584 [2024-11-18 10:42:42.385626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.584 [2024-11-18 10:42:42.385682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.584 [2024-11-18 10:42:42.385693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:16.584 10:42:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78959 00:14:17.155 [2024-11-18 10:42:42.776600] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.098 10:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:18.098 00:14:18.098 real 0m21.507s 00:14:18.098 user 0m28.210s 00:14:18.098 sys 0m2.716s 00:14:18.098 10:42:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.098 10:42:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 ************************************ 00:14:18.098 END TEST raid_rebuild_test_sb_io 00:14:18.098 ************************************ 00:14:18.098 10:42:43 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:18.098 10:42:43 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:18.098 10:42:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:18.098 10:42:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.098 10:42:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 ************************************ 00:14:18.099 START TEST raid5f_state_function_test 00:14:18.099 ************************************ 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79685 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79685' 00:14:18.099 Process raid pid: 79685 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79685 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79685 ']' 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.099 10:42:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.360 [2024-11-18 10:42:44.044611] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:18.360 [2024-11-18 10:42:44.044727] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.360 [2024-11-18 10:42:44.225799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.620 [2024-11-18 10:42:44.333141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.881 [2024-11-18 10:42:44.534083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.881 [2024-11-18 10:42:44.534117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.141 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.141 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:19.141 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.141 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.141 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.141 [2024-11-18 10:42:44.855056] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.141 [2024-11-18 10:42:44.855114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.141 [2024-11-18 10:42:44.855124] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.142 [2024-11-18 10:42:44.855150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.142 [2024-11-18 10:42:44.855156] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:19.142 [2024-11-18 10:42:44.855164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.142 "name": "Existed_Raid", 00:14:19.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.142 "strip_size_kb": 64, 00:14:19.142 "state": "configuring", 00:14:19.142 "raid_level": "raid5f", 00:14:19.142 "superblock": false, 00:14:19.142 "num_base_bdevs": 3, 00:14:19.142 "num_base_bdevs_discovered": 0, 00:14:19.142 "num_base_bdevs_operational": 3, 00:14:19.142 "base_bdevs_list": [ 00:14:19.142 { 00:14:19.142 "name": "BaseBdev1", 00:14:19.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.142 "is_configured": false, 00:14:19.142 "data_offset": 0, 00:14:19.142 "data_size": 0 00:14:19.142 }, 00:14:19.142 { 00:14:19.142 "name": "BaseBdev2", 00:14:19.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.142 "is_configured": false, 00:14:19.142 "data_offset": 0, 00:14:19.142 "data_size": 0 00:14:19.142 }, 00:14:19.142 { 00:14:19.142 "name": "BaseBdev3", 00:14:19.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.142 "is_configured": false, 00:14:19.142 "data_offset": 0, 00:14:19.142 "data_size": 0 00:14:19.142 } 00:14:19.142 ] 00:14:19.142 }' 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.142 10:42:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.411 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.411 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.411 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.411 [2024-11-18 10:42:45.290275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.411 [2024-11-18 10:42:45.290359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.676 [2024-11-18 10:42:45.302265] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.676 [2024-11-18 10:42:45.302350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.676 [2024-11-18 10:42:45.302377] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.676 [2024-11-18 10:42:45.302397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.676 [2024-11-18 10:42:45.302414] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.676 [2024-11-18 10:42:45.302433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.676 [2024-11-18 10:42:45.350274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.676 BaseBdev1 00:14:19.676 10:42:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.676 [ 00:14:19.676 { 00:14:19.676 "name": "BaseBdev1", 00:14:19.676 "aliases": [ 00:14:19.676 "da4d7fbe-aaec-4be9-ae0c-d74c61b0f491" 00:14:19.676 ], 00:14:19.676 "product_name": "Malloc disk", 00:14:19.676 "block_size": 512, 00:14:19.676 "num_blocks": 65536, 00:14:19.676 "uuid": "da4d7fbe-aaec-4be9-ae0c-d74c61b0f491", 00:14:19.676 "assigned_rate_limits": { 00:14:19.676 "rw_ios_per_sec": 0, 00:14:19.676 
"rw_mbytes_per_sec": 0, 00:14:19.676 "r_mbytes_per_sec": 0, 00:14:19.676 "w_mbytes_per_sec": 0 00:14:19.676 }, 00:14:19.676 "claimed": true, 00:14:19.676 "claim_type": "exclusive_write", 00:14:19.676 "zoned": false, 00:14:19.676 "supported_io_types": { 00:14:19.676 "read": true, 00:14:19.676 "write": true, 00:14:19.676 "unmap": true, 00:14:19.676 "flush": true, 00:14:19.676 "reset": true, 00:14:19.676 "nvme_admin": false, 00:14:19.676 "nvme_io": false, 00:14:19.676 "nvme_io_md": false, 00:14:19.676 "write_zeroes": true, 00:14:19.676 "zcopy": true, 00:14:19.676 "get_zone_info": false, 00:14:19.676 "zone_management": false, 00:14:19.676 "zone_append": false, 00:14:19.676 "compare": false, 00:14:19.676 "compare_and_write": false, 00:14:19.676 "abort": true, 00:14:19.676 "seek_hole": false, 00:14:19.676 "seek_data": false, 00:14:19.676 "copy": true, 00:14:19.676 "nvme_iov_md": false 00:14:19.676 }, 00:14:19.676 "memory_domains": [ 00:14:19.676 { 00:14:19.676 "dma_device_id": "system", 00:14:19.676 "dma_device_type": 1 00:14:19.676 }, 00:14:19.676 { 00:14:19.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.676 "dma_device_type": 2 00:14:19.676 } 00:14:19.676 ], 00:14:19.676 "driver_specific": {} 00:14:19.676 } 00:14:19.676 ] 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.676 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.676 10:42:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.677 "name": "Existed_Raid", 00:14:19.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.677 "strip_size_kb": 64, 00:14:19.677 "state": "configuring", 00:14:19.677 "raid_level": "raid5f", 00:14:19.677 "superblock": false, 00:14:19.677 "num_base_bdevs": 3, 00:14:19.677 "num_base_bdevs_discovered": 1, 00:14:19.677 "num_base_bdevs_operational": 3, 00:14:19.677 "base_bdevs_list": [ 00:14:19.677 { 00:14:19.677 "name": "BaseBdev1", 00:14:19.677 "uuid": "da4d7fbe-aaec-4be9-ae0c-d74c61b0f491", 00:14:19.677 "is_configured": true, 00:14:19.677 "data_offset": 0, 00:14:19.677 "data_size": 65536 00:14:19.677 }, 00:14:19.677 { 00:14:19.677 "name": 
"BaseBdev2", 00:14:19.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.677 "is_configured": false, 00:14:19.677 "data_offset": 0, 00:14:19.677 "data_size": 0 00:14:19.677 }, 00:14:19.677 { 00:14:19.677 "name": "BaseBdev3", 00:14:19.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.677 "is_configured": false, 00:14:19.677 "data_offset": 0, 00:14:19.677 "data_size": 0 00:14:19.677 } 00:14:19.677 ] 00:14:19.677 }' 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.677 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.937 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.938 [2024-11-18 10:42:45.785746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.938 [2024-11-18 10:42:45.785832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.938 [2024-11-18 10:42:45.797774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.938 [2024-11-18 10:42:45.799394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:19.938 [2024-11-18 10:42:45.799437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.938 [2024-11-18 10:42:45.799446] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.938 [2024-11-18 10:42:45.799455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.938 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.198 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.198 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.198 "name": "Existed_Raid", 00:14:20.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.199 "strip_size_kb": 64, 00:14:20.199 "state": "configuring", 00:14:20.199 "raid_level": "raid5f", 00:14:20.199 "superblock": false, 00:14:20.199 "num_base_bdevs": 3, 00:14:20.199 "num_base_bdevs_discovered": 1, 00:14:20.199 "num_base_bdevs_operational": 3, 00:14:20.199 "base_bdevs_list": [ 00:14:20.199 { 00:14:20.199 "name": "BaseBdev1", 00:14:20.199 "uuid": "da4d7fbe-aaec-4be9-ae0c-d74c61b0f491", 00:14:20.199 "is_configured": true, 00:14:20.199 "data_offset": 0, 00:14:20.199 "data_size": 65536 00:14:20.199 }, 00:14:20.199 { 00:14:20.199 "name": "BaseBdev2", 00:14:20.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.199 "is_configured": false, 00:14:20.199 "data_offset": 0, 00:14:20.199 "data_size": 0 00:14:20.199 }, 00:14:20.199 { 00:14:20.199 "name": "BaseBdev3", 00:14:20.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.199 "is_configured": false, 00:14:20.199 "data_offset": 0, 00:14:20.199 "data_size": 0 00:14:20.199 } 00:14:20.199 ] 00:14:20.199 }' 00:14:20.199 10:42:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.199 10:42:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.460 10:42:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.461 [2024-11-18 10:42:46.303272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.461 BaseBdev2 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.461 [ 00:14:20.461 { 00:14:20.461 "name": "BaseBdev2", 00:14:20.461 "aliases": [ 00:14:20.461 "10643bc2-0780-413a-a767-ad202dfd3d45" 00:14:20.461 ], 00:14:20.461 "product_name": "Malloc disk", 00:14:20.461 "block_size": 512, 00:14:20.461 "num_blocks": 65536, 00:14:20.461 "uuid": "10643bc2-0780-413a-a767-ad202dfd3d45", 00:14:20.461 "assigned_rate_limits": { 00:14:20.461 "rw_ios_per_sec": 0, 00:14:20.461 "rw_mbytes_per_sec": 0, 00:14:20.461 "r_mbytes_per_sec": 0, 00:14:20.461 "w_mbytes_per_sec": 0 00:14:20.461 }, 00:14:20.461 "claimed": true, 00:14:20.461 "claim_type": "exclusive_write", 00:14:20.461 "zoned": false, 00:14:20.461 "supported_io_types": { 00:14:20.461 "read": true, 00:14:20.461 "write": true, 00:14:20.461 "unmap": true, 00:14:20.461 "flush": true, 00:14:20.461 "reset": true, 00:14:20.461 "nvme_admin": false, 00:14:20.461 "nvme_io": false, 00:14:20.461 "nvme_io_md": false, 00:14:20.461 "write_zeroes": true, 00:14:20.461 "zcopy": true, 00:14:20.461 "get_zone_info": false, 00:14:20.461 "zone_management": false, 00:14:20.461 "zone_append": false, 00:14:20.461 "compare": false, 00:14:20.461 "compare_and_write": false, 00:14:20.461 "abort": true, 00:14:20.461 "seek_hole": false, 00:14:20.461 "seek_data": false, 00:14:20.461 "copy": true, 00:14:20.461 "nvme_iov_md": false 00:14:20.461 }, 00:14:20.461 "memory_domains": [ 00:14:20.461 { 00:14:20.461 "dma_device_id": "system", 00:14:20.461 "dma_device_type": 1 00:14:20.461 }, 00:14:20.461 { 00:14:20.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.461 "dma_device_type": 2 00:14:20.461 } 00:14:20.461 ], 00:14:20.461 "driver_specific": {} 00:14:20.461 } 00:14:20.461 ] 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.461 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.722 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:20.722 "name": "Existed_Raid", 00:14:20.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.722 "strip_size_kb": 64, 00:14:20.722 "state": "configuring", 00:14:20.722 "raid_level": "raid5f", 00:14:20.722 "superblock": false, 00:14:20.722 "num_base_bdevs": 3, 00:14:20.722 "num_base_bdevs_discovered": 2, 00:14:20.722 "num_base_bdevs_operational": 3, 00:14:20.722 "base_bdevs_list": [ 00:14:20.722 { 00:14:20.722 "name": "BaseBdev1", 00:14:20.722 "uuid": "da4d7fbe-aaec-4be9-ae0c-d74c61b0f491", 00:14:20.722 "is_configured": true, 00:14:20.722 "data_offset": 0, 00:14:20.722 "data_size": 65536 00:14:20.722 }, 00:14:20.722 { 00:14:20.722 "name": "BaseBdev2", 00:14:20.722 "uuid": "10643bc2-0780-413a-a767-ad202dfd3d45", 00:14:20.723 "is_configured": true, 00:14:20.723 "data_offset": 0, 00:14:20.723 "data_size": 65536 00:14:20.723 }, 00:14:20.723 { 00:14:20.723 "name": "BaseBdev3", 00:14:20.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.723 "is_configured": false, 00:14:20.723 "data_offset": 0, 00:14:20.723 "data_size": 0 00:14:20.723 } 00:14:20.723 ] 00:14:20.723 }' 00:14:20.723 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.723 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.983 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:20.983 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.983 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.243 [2024-11-18 10:42:46.894003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.243 [2024-11-18 10:42:46.894084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:21.243 [2024-11-18 10:42:46.894098] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:21.243 [2024-11-18 10:42:46.894368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:21.243 [2024-11-18 10:42:46.899513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:21.243 [2024-11-18 10:42:46.899602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:21.243 [2024-11-18 10:42:46.899860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.243 BaseBdev3 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.243 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.243 [ 00:14:21.243 { 00:14:21.243 "name": "BaseBdev3", 00:14:21.243 "aliases": [ 00:14:21.243 "5234ef69-d8e1-4136-94ed-5cfd0993c397" 00:14:21.243 ], 00:14:21.243 "product_name": "Malloc disk", 00:14:21.243 "block_size": 512, 00:14:21.243 "num_blocks": 65536, 00:14:21.243 "uuid": "5234ef69-d8e1-4136-94ed-5cfd0993c397", 00:14:21.243 "assigned_rate_limits": { 00:14:21.243 "rw_ios_per_sec": 0, 00:14:21.243 "rw_mbytes_per_sec": 0, 00:14:21.243 "r_mbytes_per_sec": 0, 00:14:21.243 "w_mbytes_per_sec": 0 00:14:21.243 }, 00:14:21.243 "claimed": true, 00:14:21.243 "claim_type": "exclusive_write", 00:14:21.243 "zoned": false, 00:14:21.243 "supported_io_types": { 00:14:21.243 "read": true, 00:14:21.243 "write": true, 00:14:21.243 "unmap": true, 00:14:21.243 "flush": true, 00:14:21.243 "reset": true, 00:14:21.243 "nvme_admin": false, 00:14:21.243 "nvme_io": false, 00:14:21.243 "nvme_io_md": false, 00:14:21.243 "write_zeroes": true, 00:14:21.243 "zcopy": true, 00:14:21.243 "get_zone_info": false, 00:14:21.243 "zone_management": false, 00:14:21.243 "zone_append": false, 00:14:21.243 "compare": false, 00:14:21.243 "compare_and_write": false, 00:14:21.243 "abort": true, 00:14:21.243 "seek_hole": false, 00:14:21.243 "seek_data": false, 00:14:21.243 "copy": true, 00:14:21.243 "nvme_iov_md": false 00:14:21.243 }, 00:14:21.243 "memory_domains": [ 00:14:21.243 { 00:14:21.244 "dma_device_id": "system", 00:14:21.244 "dma_device_type": 1 00:14:21.244 }, 00:14:21.244 { 00:14:21.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.244 "dma_device_type": 2 00:14:21.244 } 00:14:21.244 ], 00:14:21.244 "driver_specific": {} 00:14:21.244 } 00:14:21.244 ] 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.244 10:42:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.244 "name": "Existed_Raid", 00:14:21.244 "uuid": "3b8686aa-4aa5-4ae7-8165-9e619b401e6c", 00:14:21.244 "strip_size_kb": 64, 00:14:21.244 "state": "online", 00:14:21.244 "raid_level": "raid5f", 00:14:21.244 "superblock": false, 00:14:21.244 "num_base_bdevs": 3, 00:14:21.244 "num_base_bdevs_discovered": 3, 00:14:21.244 "num_base_bdevs_operational": 3, 00:14:21.244 "base_bdevs_list": [ 00:14:21.244 { 00:14:21.244 "name": "BaseBdev1", 00:14:21.244 "uuid": "da4d7fbe-aaec-4be9-ae0c-d74c61b0f491", 00:14:21.244 "is_configured": true, 00:14:21.244 "data_offset": 0, 00:14:21.244 "data_size": 65536 00:14:21.244 }, 00:14:21.244 { 00:14:21.244 "name": "BaseBdev2", 00:14:21.244 "uuid": "10643bc2-0780-413a-a767-ad202dfd3d45", 00:14:21.244 "is_configured": true, 00:14:21.244 "data_offset": 0, 00:14:21.244 "data_size": 65536 00:14:21.244 }, 00:14:21.244 { 00:14:21.244 "name": "BaseBdev3", 00:14:21.244 "uuid": "5234ef69-d8e1-4136-94ed-5cfd0993c397", 00:14:21.244 "is_configured": true, 00:14:21.244 "data_offset": 0, 00:14:21.244 "data_size": 65536 00:14:21.244 } 00:14:21.244 ] 00:14:21.244 }' 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.244 10:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.504 10:42:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.504 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.765 [2024-11-18 10:42:47.389053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.765 "name": "Existed_Raid", 00:14:21.765 "aliases": [ 00:14:21.765 "3b8686aa-4aa5-4ae7-8165-9e619b401e6c" 00:14:21.765 ], 00:14:21.765 "product_name": "Raid Volume", 00:14:21.765 "block_size": 512, 00:14:21.765 "num_blocks": 131072, 00:14:21.765 "uuid": "3b8686aa-4aa5-4ae7-8165-9e619b401e6c", 00:14:21.765 "assigned_rate_limits": { 00:14:21.765 "rw_ios_per_sec": 0, 00:14:21.765 "rw_mbytes_per_sec": 0, 00:14:21.765 "r_mbytes_per_sec": 0, 00:14:21.765 "w_mbytes_per_sec": 0 00:14:21.765 }, 00:14:21.765 "claimed": false, 00:14:21.765 "zoned": false, 00:14:21.765 "supported_io_types": { 00:14:21.765 "read": true, 00:14:21.765 "write": true, 00:14:21.765 "unmap": false, 00:14:21.765 "flush": false, 00:14:21.765 "reset": true, 00:14:21.765 "nvme_admin": false, 00:14:21.765 "nvme_io": false, 00:14:21.765 "nvme_io_md": false, 00:14:21.765 "write_zeroes": true, 00:14:21.765 "zcopy": false, 00:14:21.765 "get_zone_info": false, 00:14:21.765 "zone_management": false, 00:14:21.765 "zone_append": false, 
00:14:21.765 "compare": false, 00:14:21.765 "compare_and_write": false, 00:14:21.765 "abort": false, 00:14:21.765 "seek_hole": false, 00:14:21.765 "seek_data": false, 00:14:21.765 "copy": false, 00:14:21.765 "nvme_iov_md": false 00:14:21.765 }, 00:14:21.765 "driver_specific": { 00:14:21.765 "raid": { 00:14:21.765 "uuid": "3b8686aa-4aa5-4ae7-8165-9e619b401e6c", 00:14:21.765 "strip_size_kb": 64, 00:14:21.765 "state": "online", 00:14:21.765 "raid_level": "raid5f", 00:14:21.765 "superblock": false, 00:14:21.765 "num_base_bdevs": 3, 00:14:21.765 "num_base_bdevs_discovered": 3, 00:14:21.765 "num_base_bdevs_operational": 3, 00:14:21.765 "base_bdevs_list": [ 00:14:21.765 { 00:14:21.765 "name": "BaseBdev1", 00:14:21.765 "uuid": "da4d7fbe-aaec-4be9-ae0c-d74c61b0f491", 00:14:21.765 "is_configured": true, 00:14:21.765 "data_offset": 0, 00:14:21.765 "data_size": 65536 00:14:21.765 }, 00:14:21.765 { 00:14:21.765 "name": "BaseBdev2", 00:14:21.765 "uuid": "10643bc2-0780-413a-a767-ad202dfd3d45", 00:14:21.765 "is_configured": true, 00:14:21.765 "data_offset": 0, 00:14:21.765 "data_size": 65536 00:14:21.765 }, 00:14:21.765 { 00:14:21.765 "name": "BaseBdev3", 00:14:21.765 "uuid": "5234ef69-d8e1-4136-94ed-5cfd0993c397", 00:14:21.765 "is_configured": true, 00:14:21.765 "data_offset": 0, 00:14:21.765 "data_size": 65536 00:14:21.765 } 00:14:21.765 ] 00:14:21.765 } 00:14:21.765 } 00:14:21.765 }' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:21.765 BaseBdev2 00:14:21.765 BaseBdev3' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.765 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.765 [2024-11-18 10:42:47.624513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:22.025 
10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.025 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.026 "name": "Existed_Raid", 00:14:22.026 "uuid": "3b8686aa-4aa5-4ae7-8165-9e619b401e6c", 00:14:22.026 "strip_size_kb": 64, 00:14:22.026 "state": 
"online", 00:14:22.026 "raid_level": "raid5f", 00:14:22.026 "superblock": false, 00:14:22.026 "num_base_bdevs": 3, 00:14:22.026 "num_base_bdevs_discovered": 2, 00:14:22.026 "num_base_bdevs_operational": 2, 00:14:22.026 "base_bdevs_list": [ 00:14:22.026 { 00:14:22.026 "name": null, 00:14:22.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.026 "is_configured": false, 00:14:22.026 "data_offset": 0, 00:14:22.026 "data_size": 65536 00:14:22.026 }, 00:14:22.026 { 00:14:22.026 "name": "BaseBdev2", 00:14:22.026 "uuid": "10643bc2-0780-413a-a767-ad202dfd3d45", 00:14:22.026 "is_configured": true, 00:14:22.026 "data_offset": 0, 00:14:22.026 "data_size": 65536 00:14:22.026 }, 00:14:22.026 { 00:14:22.026 "name": "BaseBdev3", 00:14:22.026 "uuid": "5234ef69-d8e1-4136-94ed-5cfd0993c397", 00:14:22.026 "is_configured": true, 00:14:22.026 "data_offset": 0, 00:14:22.026 "data_size": 65536 00:14:22.026 } 00:14:22.026 ] 00:14:22.026 }' 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.026 10:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.596 [2024-11-18 10:42:48.236146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.596 [2024-11-18 10:42:48.236331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.596 [2024-11-18 10:42:48.324261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.596 [2024-11-18 10:42:48.384202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.596 [2024-11-18 10:42:48.384293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.596 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 BaseBdev2 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:22.857 [ 00:14:22.857 { 00:14:22.857 "name": "BaseBdev2", 00:14:22.857 "aliases": [ 00:14:22.857 "ae701f1a-8717-44ae-864b-5a229d6cadc1" 00:14:22.857 ], 00:14:22.857 "product_name": "Malloc disk", 00:14:22.857 "block_size": 512, 00:14:22.857 "num_blocks": 65536, 00:14:22.857 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:22.857 "assigned_rate_limits": { 00:14:22.857 "rw_ios_per_sec": 0, 00:14:22.857 "rw_mbytes_per_sec": 0, 00:14:22.857 "r_mbytes_per_sec": 0, 00:14:22.857 "w_mbytes_per_sec": 0 00:14:22.857 }, 00:14:22.857 "claimed": false, 00:14:22.857 "zoned": false, 00:14:22.857 "supported_io_types": { 00:14:22.857 "read": true, 00:14:22.857 "write": true, 00:14:22.857 "unmap": true, 00:14:22.857 "flush": true, 00:14:22.857 "reset": true, 00:14:22.857 "nvme_admin": false, 00:14:22.857 "nvme_io": false, 00:14:22.857 "nvme_io_md": false, 00:14:22.857 "write_zeroes": true, 00:14:22.857 "zcopy": true, 00:14:22.857 "get_zone_info": false, 00:14:22.857 "zone_management": false, 00:14:22.857 "zone_append": false, 00:14:22.857 "compare": false, 00:14:22.857 "compare_and_write": false, 00:14:22.857 "abort": true, 00:14:22.857 "seek_hole": false, 00:14:22.857 "seek_data": false, 00:14:22.857 "copy": true, 00:14:22.857 "nvme_iov_md": false 00:14:22.857 }, 00:14:22.857 "memory_domains": [ 00:14:22.857 { 00:14:22.857 "dma_device_id": "system", 00:14:22.857 "dma_device_type": 1 00:14:22.857 }, 00:14:22.857 { 00:14:22.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.857 "dma_device_type": 2 00:14:22.857 } 00:14:22.857 ], 00:14:22.857 "driver_specific": {} 00:14:22.857 } 00:14:22.857 ] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 BaseBdev3 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.857 [ 00:14:22.857 { 00:14:22.857 "name": "BaseBdev3", 00:14:22.857 "aliases": [ 00:14:22.857 "b6909784-9b50-4d2a-ba8f-e3552ceed8ab" 00:14:22.857 ], 00:14:22.857 "product_name": "Malloc disk", 00:14:22.857 "block_size": 512, 00:14:22.857 "num_blocks": 65536, 00:14:22.857 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:22.857 "assigned_rate_limits": { 00:14:22.857 "rw_ios_per_sec": 0, 00:14:22.857 "rw_mbytes_per_sec": 0, 00:14:22.857 "r_mbytes_per_sec": 0, 00:14:22.857 "w_mbytes_per_sec": 0 00:14:22.857 }, 00:14:22.857 "claimed": false, 00:14:22.857 "zoned": false, 00:14:22.857 "supported_io_types": { 00:14:22.857 "read": true, 00:14:22.857 "write": true, 00:14:22.857 "unmap": true, 00:14:22.857 "flush": true, 00:14:22.857 "reset": true, 00:14:22.857 "nvme_admin": false, 00:14:22.857 "nvme_io": false, 00:14:22.857 "nvme_io_md": false, 00:14:22.857 "write_zeroes": true, 00:14:22.857 "zcopy": true, 00:14:22.857 "get_zone_info": false, 00:14:22.857 "zone_management": false, 00:14:22.857 "zone_append": false, 00:14:22.857 "compare": false, 00:14:22.857 "compare_and_write": false, 00:14:22.857 "abort": true, 00:14:22.857 "seek_hole": false, 00:14:22.857 "seek_data": false, 00:14:22.857 "copy": true, 00:14:22.857 "nvme_iov_md": false 00:14:22.857 }, 00:14:22.857 "memory_domains": [ 00:14:22.857 { 00:14:22.857 "dma_device_id": "system", 00:14:22.857 "dma_device_type": 1 00:14:22.857 }, 00:14:22.857 { 00:14:22.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.857 "dma_device_type": 2 00:14:22.857 } 00:14:22.857 ], 00:14:22.857 "driver_specific": {} 00:14:22.857 } 00:14:22.857 ] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.857 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.857 10:42:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.858 [2024-11-18 10:42:48.688069] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.858 [2024-11-18 10:42:48.688201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.858 [2024-11-18 10:42:48.688244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.858 [2024-11-18 10:42:48.689871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.858 10:42:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.858 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.117 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.117 "name": "Existed_Raid", 00:14:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.117 "strip_size_kb": 64, 00:14:23.117 "state": "configuring", 00:14:23.117 "raid_level": "raid5f", 00:14:23.117 "superblock": false, 00:14:23.117 "num_base_bdevs": 3, 00:14:23.117 "num_base_bdevs_discovered": 2, 00:14:23.117 "num_base_bdevs_operational": 3, 00:14:23.117 "base_bdevs_list": [ 00:14:23.117 { 00:14:23.117 "name": "BaseBdev1", 00:14:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.117 "is_configured": false, 00:14:23.118 "data_offset": 0, 00:14:23.118 "data_size": 0 00:14:23.118 }, 00:14:23.118 { 00:14:23.118 "name": "BaseBdev2", 00:14:23.118 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:23.118 "is_configured": true, 00:14:23.118 "data_offset": 0, 00:14:23.118 "data_size": 65536 00:14:23.118 }, 00:14:23.118 { 00:14:23.118 "name": "BaseBdev3", 00:14:23.118 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:23.118 "is_configured": true, 
00:14:23.118 "data_offset": 0, 00:14:23.118 "data_size": 65536 00:14:23.118 } 00:14:23.118 ] 00:14:23.118 }' 00:14:23.118 10:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.118 10:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.378 [2024-11-18 10:42:49.091327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.378 10:42:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.378 "name": "Existed_Raid", 00:14:23.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.378 "strip_size_kb": 64, 00:14:23.378 "state": "configuring", 00:14:23.378 "raid_level": "raid5f", 00:14:23.378 "superblock": false, 00:14:23.378 "num_base_bdevs": 3, 00:14:23.378 "num_base_bdevs_discovered": 1, 00:14:23.378 "num_base_bdevs_operational": 3, 00:14:23.378 "base_bdevs_list": [ 00:14:23.378 { 00:14:23.378 "name": "BaseBdev1", 00:14:23.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.378 "is_configured": false, 00:14:23.378 "data_offset": 0, 00:14:23.378 "data_size": 0 00:14:23.378 }, 00:14:23.378 { 00:14:23.378 "name": null, 00:14:23.378 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:23.378 "is_configured": false, 00:14:23.378 "data_offset": 0, 00:14:23.378 "data_size": 65536 00:14:23.378 }, 00:14:23.378 { 00:14:23.378 "name": "BaseBdev3", 00:14:23.378 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:23.378 "is_configured": true, 00:14:23.378 "data_offset": 0, 00:14:23.378 "data_size": 65536 00:14:23.378 } 00:14:23.378 ] 00:14:23.378 }' 00:14:23.378 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.378 10:42:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 [2024-11-18 10:42:49.605783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.946 BaseBdev1 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.946 10:42:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 [ 00:14:23.946 { 00:14:23.946 "name": "BaseBdev1", 00:14:23.946 "aliases": [ 00:14:23.946 "92ecb6c6-34ea-41f4-9e83-18d9db84e9da" 00:14:23.946 ], 00:14:23.946 "product_name": "Malloc disk", 00:14:23.946 "block_size": 512, 00:14:23.946 "num_blocks": 65536, 00:14:23.946 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:23.946 "assigned_rate_limits": { 00:14:23.946 "rw_ios_per_sec": 0, 00:14:23.946 "rw_mbytes_per_sec": 0, 00:14:23.946 "r_mbytes_per_sec": 0, 00:14:23.946 "w_mbytes_per_sec": 0 00:14:23.946 }, 00:14:23.946 "claimed": true, 00:14:23.946 "claim_type": "exclusive_write", 00:14:23.946 "zoned": false, 00:14:23.946 "supported_io_types": { 00:14:23.946 "read": true, 00:14:23.946 "write": true, 00:14:23.946 "unmap": true, 00:14:23.946 "flush": true, 00:14:23.946 "reset": true, 00:14:23.946 "nvme_admin": false, 00:14:23.946 "nvme_io": false, 00:14:23.946 "nvme_io_md": false, 00:14:23.946 "write_zeroes": true, 00:14:23.946 "zcopy": true, 00:14:23.946 "get_zone_info": false, 00:14:23.946 "zone_management": false, 00:14:23.946 "zone_append": false, 00:14:23.946 
"compare": false, 00:14:23.946 "compare_and_write": false, 00:14:23.946 "abort": true, 00:14:23.946 "seek_hole": false, 00:14:23.946 "seek_data": false, 00:14:23.946 "copy": true, 00:14:23.946 "nvme_iov_md": false 00:14:23.946 }, 00:14:23.946 "memory_domains": [ 00:14:23.946 { 00:14:23.946 "dma_device_id": "system", 00:14:23.946 "dma_device_type": 1 00:14:23.946 }, 00:14:23.946 { 00:14:23.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.946 "dma_device_type": 2 00:14:23.946 } 00:14:23.946 ], 00:14:23.946 "driver_specific": {} 00:14:23.946 } 00:14:23.946 ] 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.946 10:42:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.946 "name": "Existed_Raid", 00:14:23.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.946 "strip_size_kb": 64, 00:14:23.946 "state": "configuring", 00:14:23.946 "raid_level": "raid5f", 00:14:23.946 "superblock": false, 00:14:23.946 "num_base_bdevs": 3, 00:14:23.946 "num_base_bdevs_discovered": 2, 00:14:23.946 "num_base_bdevs_operational": 3, 00:14:23.946 "base_bdevs_list": [ 00:14:23.946 { 00:14:23.946 "name": "BaseBdev1", 00:14:23.946 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:23.946 "is_configured": true, 00:14:23.946 "data_offset": 0, 00:14:23.946 "data_size": 65536 00:14:23.946 }, 00:14:23.946 { 00:14:23.946 "name": null, 00:14:23.946 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:23.946 "is_configured": false, 00:14:23.946 "data_offset": 0, 00:14:23.946 "data_size": 65536 00:14:23.946 }, 00:14:23.946 { 00:14:23.946 "name": "BaseBdev3", 00:14:23.946 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:23.946 "is_configured": true, 00:14:23.946 "data_offset": 0, 00:14:23.946 "data_size": 65536 00:14:23.946 } 00:14:23.946 ] 00:14:23.946 }' 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.946 10:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.516 10:42:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.516 [2024-11-18 10:42:50.164880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.516 10:42:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.516 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.516 "name": "Existed_Raid", 00:14:24.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.516 "strip_size_kb": 64, 00:14:24.516 "state": "configuring", 00:14:24.516 "raid_level": "raid5f", 00:14:24.516 "superblock": false, 00:14:24.516 "num_base_bdevs": 3, 00:14:24.516 "num_base_bdevs_discovered": 1, 00:14:24.516 "num_base_bdevs_operational": 3, 00:14:24.516 "base_bdevs_list": [ 00:14:24.516 { 00:14:24.516 "name": "BaseBdev1", 00:14:24.516 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:24.516 "is_configured": true, 00:14:24.516 "data_offset": 0, 00:14:24.516 "data_size": 65536 00:14:24.516 }, 00:14:24.516 { 00:14:24.516 "name": null, 00:14:24.516 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:24.516 "is_configured": false, 00:14:24.516 "data_offset": 0, 00:14:24.516 "data_size": 65536 00:14:24.516 }, 00:14:24.516 { 00:14:24.516 "name": null, 
00:14:24.516 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:24.516 "is_configured": false, 00:14:24.516 "data_offset": 0, 00:14:24.516 "data_size": 65536 00:14:24.516 } 00:14:24.516 ] 00:14:24.516 }' 00:14:24.517 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.517 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.776 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.776 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.776 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.776 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.776 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.776 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.776 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.777 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.777 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.036 [2024-11-18 10:42:50.664028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.036 10:42:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.036 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.036 "name": "Existed_Raid", 00:14:25.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.036 "strip_size_kb": 64, 00:14:25.036 "state": "configuring", 00:14:25.036 "raid_level": "raid5f", 00:14:25.036 "superblock": false, 00:14:25.036 "num_base_bdevs": 3, 00:14:25.036 "num_base_bdevs_discovered": 2, 00:14:25.036 "num_base_bdevs_operational": 3, 00:14:25.036 "base_bdevs_list": [ 00:14:25.036 { 
00:14:25.036 "name": "BaseBdev1", 00:14:25.036 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:25.036 "is_configured": true, 00:14:25.036 "data_offset": 0, 00:14:25.036 "data_size": 65536 00:14:25.036 }, 00:14:25.036 { 00:14:25.036 "name": null, 00:14:25.036 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:25.036 "is_configured": false, 00:14:25.036 "data_offset": 0, 00:14:25.036 "data_size": 65536 00:14:25.036 }, 00:14:25.036 { 00:14:25.036 "name": "BaseBdev3", 00:14:25.036 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:25.036 "is_configured": true, 00:14:25.036 "data_offset": 0, 00:14:25.036 "data_size": 65536 00:14:25.036 } 00:14:25.036 ] 00:14:25.036 }' 00:14:25.037 10:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.037 10:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.296 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.296 [2024-11-18 10:42:51.159195] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.556 "name": "Existed_Raid", 00:14:25.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.556 "strip_size_kb": 64, 00:14:25.556 "state": "configuring", 00:14:25.556 "raid_level": "raid5f", 00:14:25.556 "superblock": false, 00:14:25.556 "num_base_bdevs": 3, 00:14:25.556 "num_base_bdevs_discovered": 1, 00:14:25.556 "num_base_bdevs_operational": 3, 00:14:25.556 "base_bdevs_list": [ 00:14:25.556 { 00:14:25.556 "name": null, 00:14:25.556 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:25.556 "is_configured": false, 00:14:25.556 "data_offset": 0, 00:14:25.556 "data_size": 65536 00:14:25.556 }, 00:14:25.556 { 00:14:25.556 "name": null, 00:14:25.556 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:25.556 "is_configured": false, 00:14:25.556 "data_offset": 0, 00:14:25.556 "data_size": 65536 00:14:25.556 }, 00:14:25.556 { 00:14:25.556 "name": "BaseBdev3", 00:14:25.556 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:25.556 "is_configured": true, 00:14:25.556 "data_offset": 0, 00:14:25.556 "data_size": 65536 00:14:25.556 } 00:14:25.556 ] 00:14:25.556 }' 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.556 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.126 [2024-11-18 10:42:51.767954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.126 10:42:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.126 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.126 "name": "Existed_Raid", 00:14:26.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.126 "strip_size_kb": 64, 00:14:26.126 "state": "configuring", 00:14:26.126 "raid_level": "raid5f", 00:14:26.126 "superblock": false, 00:14:26.126 "num_base_bdevs": 3, 00:14:26.126 "num_base_bdevs_discovered": 2, 00:14:26.126 "num_base_bdevs_operational": 3, 00:14:26.126 "base_bdevs_list": [ 00:14:26.126 { 00:14:26.126 "name": null, 00:14:26.126 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:26.126 "is_configured": false, 00:14:26.126 "data_offset": 0, 00:14:26.126 "data_size": 65536 00:14:26.126 }, 00:14:26.126 { 00:14:26.126 "name": "BaseBdev2", 00:14:26.126 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:26.126 "is_configured": true, 00:14:26.126 "data_offset": 0, 00:14:26.126 "data_size": 65536 00:14:26.126 }, 00:14:26.126 { 00:14:26.126 "name": "BaseBdev3", 00:14:26.126 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:26.126 "is_configured": true, 00:14:26.126 "data_offset": 0, 00:14:26.127 "data_size": 65536 00:14:26.127 } 00:14:26.127 ] 00:14:26.127 }' 00:14:26.127 10:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.127 10:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.410 10:42:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 92ecb6c6-34ea-41f4-9e83-18d9db84e9da 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.410 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.678 [2024-11-18 10:42:52.277595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.678 [2024-11-18 10:42:52.277705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:26.678 [2024-11-18 10:42:52.277719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:26.678 [2024-11-18 10:42:52.277953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:26.678 [2024-11-18 10:42:52.283278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:26.678 [2024-11-18 10:42:52.283298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:26.678 [2024-11-18 10:42:52.283542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.678 NewBaseBdev 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.678 10:42:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.678 [ 00:14:26.678 { 00:14:26.678 "name": "NewBaseBdev", 00:14:26.678 "aliases": [ 00:14:26.678 "92ecb6c6-34ea-41f4-9e83-18d9db84e9da" 00:14:26.678 ], 00:14:26.678 "product_name": "Malloc disk", 00:14:26.678 "block_size": 512, 00:14:26.678 "num_blocks": 65536, 00:14:26.678 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:26.678 "assigned_rate_limits": { 00:14:26.678 "rw_ios_per_sec": 0, 00:14:26.678 "rw_mbytes_per_sec": 0, 00:14:26.678 "r_mbytes_per_sec": 0, 00:14:26.678 "w_mbytes_per_sec": 0 00:14:26.678 }, 00:14:26.678 "claimed": true, 00:14:26.678 "claim_type": "exclusive_write", 00:14:26.678 "zoned": false, 00:14:26.678 "supported_io_types": { 00:14:26.678 "read": true, 00:14:26.678 "write": true, 00:14:26.678 "unmap": true, 00:14:26.678 "flush": true, 00:14:26.678 "reset": true, 00:14:26.678 "nvme_admin": false, 00:14:26.678 "nvme_io": false, 00:14:26.678 "nvme_io_md": false, 00:14:26.678 "write_zeroes": true, 00:14:26.678 "zcopy": true, 00:14:26.678 "get_zone_info": false, 00:14:26.678 "zone_management": false, 00:14:26.678 "zone_append": false, 00:14:26.678 "compare": false, 00:14:26.678 "compare_and_write": false, 00:14:26.678 "abort": true, 00:14:26.678 "seek_hole": false, 00:14:26.678 "seek_data": false, 00:14:26.678 "copy": true, 00:14:26.678 "nvme_iov_md": false 00:14:26.678 }, 00:14:26.678 "memory_domains": [ 00:14:26.678 { 00:14:26.678 "dma_device_id": "system", 00:14:26.678 "dma_device_type": 1 00:14:26.678 }, 00:14:26.678 { 00:14:26.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.678 "dma_device_type": 2 00:14:26.678 } 00:14:26.678 ], 00:14:26.678 "driver_specific": {} 00:14:26.678 } 00:14:26.678 ] 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:26.678 10:42:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.678 "name": "Existed_Raid", 00:14:26.678 "uuid": "9965d223-4541-4a0b-b0c4-866d452fdd9b", 00:14:26.678 "strip_size_kb": 64, 00:14:26.678 "state": "online", 
00:14:26.678 "raid_level": "raid5f", 00:14:26.678 "superblock": false, 00:14:26.678 "num_base_bdevs": 3, 00:14:26.678 "num_base_bdevs_discovered": 3, 00:14:26.678 "num_base_bdevs_operational": 3, 00:14:26.678 "base_bdevs_list": [ 00:14:26.678 { 00:14:26.678 "name": "NewBaseBdev", 00:14:26.678 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:26.678 "is_configured": true, 00:14:26.678 "data_offset": 0, 00:14:26.678 "data_size": 65536 00:14:26.678 }, 00:14:26.678 { 00:14:26.678 "name": "BaseBdev2", 00:14:26.678 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:26.678 "is_configured": true, 00:14:26.678 "data_offset": 0, 00:14:26.678 "data_size": 65536 00:14:26.678 }, 00:14:26.678 { 00:14:26.678 "name": "BaseBdev3", 00:14:26.678 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:26.678 "is_configured": true, 00:14:26.678 "data_offset": 0, 00:14:26.678 "data_size": 65536 00:14:26.678 } 00:14:26.678 ] 00:14:26.678 }' 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.678 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.938 [2024-11-18 10:42:52.768816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.938 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.938 "name": "Existed_Raid", 00:14:26.938 "aliases": [ 00:14:26.938 "9965d223-4541-4a0b-b0c4-866d452fdd9b" 00:14:26.938 ], 00:14:26.938 "product_name": "Raid Volume", 00:14:26.938 "block_size": 512, 00:14:26.938 "num_blocks": 131072, 00:14:26.938 "uuid": "9965d223-4541-4a0b-b0c4-866d452fdd9b", 00:14:26.938 "assigned_rate_limits": { 00:14:26.938 "rw_ios_per_sec": 0, 00:14:26.938 "rw_mbytes_per_sec": 0, 00:14:26.938 "r_mbytes_per_sec": 0, 00:14:26.938 "w_mbytes_per_sec": 0 00:14:26.938 }, 00:14:26.938 "claimed": false, 00:14:26.938 "zoned": false, 00:14:26.938 "supported_io_types": { 00:14:26.938 "read": true, 00:14:26.938 "write": true, 00:14:26.938 "unmap": false, 00:14:26.938 "flush": false, 00:14:26.938 "reset": true, 00:14:26.938 "nvme_admin": false, 00:14:26.938 "nvme_io": false, 00:14:26.938 "nvme_io_md": false, 00:14:26.938 "write_zeroes": true, 00:14:26.938 "zcopy": false, 00:14:26.938 "get_zone_info": false, 00:14:26.938 "zone_management": false, 00:14:26.938 "zone_append": false, 00:14:26.938 "compare": false, 00:14:26.938 "compare_and_write": false, 00:14:26.938 "abort": false, 00:14:26.938 "seek_hole": false, 00:14:26.938 "seek_data": false, 00:14:26.938 "copy": false, 00:14:26.938 "nvme_iov_md": false 00:14:26.938 }, 00:14:26.938 "driver_specific": { 00:14:26.938 "raid": { 00:14:26.938 "uuid": "9965d223-4541-4a0b-b0c4-866d452fdd9b", 
00:14:26.938 "strip_size_kb": 64, 00:14:26.938 "state": "online", 00:14:26.938 "raid_level": "raid5f", 00:14:26.938 "superblock": false, 00:14:26.938 "num_base_bdevs": 3, 00:14:26.939 "num_base_bdevs_discovered": 3, 00:14:26.939 "num_base_bdevs_operational": 3, 00:14:26.939 "base_bdevs_list": [ 00:14:26.939 { 00:14:26.939 "name": "NewBaseBdev", 00:14:26.939 "uuid": "92ecb6c6-34ea-41f4-9e83-18d9db84e9da", 00:14:26.939 "is_configured": true, 00:14:26.939 "data_offset": 0, 00:14:26.939 "data_size": 65536 00:14:26.939 }, 00:14:26.939 { 00:14:26.939 "name": "BaseBdev2", 00:14:26.939 "uuid": "ae701f1a-8717-44ae-864b-5a229d6cadc1", 00:14:26.939 "is_configured": true, 00:14:26.939 "data_offset": 0, 00:14:26.939 "data_size": 65536 00:14:26.939 }, 00:14:26.939 { 00:14:26.939 "name": "BaseBdev3", 00:14:26.939 "uuid": "b6909784-9b50-4d2a-ba8f-e3552ceed8ab", 00:14:26.939 "is_configured": true, 00:14:26.939 "data_offset": 0, 00:14:26.939 "data_size": 65536 00:14:26.939 } 00:14:26.939 ] 00:14:26.939 } 00:14:26.939 } 00:14:26.939 }' 00:14:26.939 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:27.199 BaseBdev2 00:14:27.199 BaseBdev3' 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:27.199 10:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.200 10:42:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.200 10:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.200 [2024-11-18 10:42:53.044259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.200 [2024-11-18 10:42:53.044282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.200 [2024-11-18 10:42:53.044343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.200 [2024-11-18 10:42:53.044597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.200 [2024-11-18 10:42:53.044609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79685 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79685 ']' 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 79685 
00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.200 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79685 00:14:27.460 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.460 killing process with pid 79685 00:14:27.460 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.460 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79685' 00:14:27.460 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79685 00:14:27.460 [2024-11-18 10:42:53.094713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.460 10:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79685 00:14:27.720 [2024-11-18 10:42:53.375532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.659 10:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:28.660 00:14:28.660 real 0m10.469s 00:14:28.660 user 0m16.678s 00:14:28.660 sys 0m1.965s 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.660 ************************************ 00:14:28.660 END TEST raid5f_state_function_test 00:14:28.660 ************************************ 00:14:28.660 10:42:54 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:28.660 10:42:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:28.660 
10:42:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.660 10:42:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.660 ************************************ 00:14:28.660 START TEST raid5f_state_function_test_sb 00:14:28.660 ************************************ 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.660 
10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80302 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:28.660 Process raid pid: 80302 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80302' 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80302 00:14:28.660 10:42:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80302 ']' 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.660 10:42:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.920 [2024-11-18 10:42:54.604819] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:28.920 [2024-11-18 10:42:54.605033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.920 [2024-11-18 10:42:54.786918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.180 [2024-11-18 10:42:54.894413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.440 [2024-11-18 10:42:55.085436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.440 [2024-11-18 10:42:55.085466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:29.700 10:42:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.700 [2024-11-18 10:42:55.401878] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:29.700 [2024-11-18 10:42:55.401932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:29.700 [2024-11-18 10:42:55.401942] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:29.700 [2024-11-18 10:42:55.401951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:29.700 [2024-11-18 10:42:55.401957] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:29.700 [2024-11-18 10:42:55.401965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.700 "name": "Existed_Raid", 00:14:29.700 "uuid": "21672ec3-f673-41f9-a119-36d5a4a61e34", 00:14:29.700 "strip_size_kb": 64, 00:14:29.700 "state": "configuring", 00:14:29.700 "raid_level": "raid5f", 00:14:29.700 "superblock": true, 00:14:29.700 "num_base_bdevs": 3, 00:14:29.700 "num_base_bdevs_discovered": 0, 00:14:29.700 "num_base_bdevs_operational": 3, 00:14:29.700 "base_bdevs_list": [ 00:14:29.700 { 00:14:29.700 "name": "BaseBdev1", 00:14:29.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.700 "is_configured": false, 00:14:29.700 "data_offset": 0, 00:14:29.700 "data_size": 0 00:14:29.700 }, 00:14:29.700 { 00:14:29.700 "name": "BaseBdev2", 00:14:29.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.700 "is_configured": false, 00:14:29.700 
"data_offset": 0, 00:14:29.700 "data_size": 0 00:14:29.700 }, 00:14:29.700 { 00:14:29.700 "name": "BaseBdev3", 00:14:29.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.700 "is_configured": false, 00:14:29.700 "data_offset": 0, 00:14:29.700 "data_size": 0 00:14:29.700 } 00:14:29.700 ] 00:14:29.700 }' 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.700 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.959 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:29.959 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.959 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.219 [2024-11-18 10:42:55.845036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.219 [2024-11-18 10:42:55.845140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:30.219 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.219 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.219 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.219 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.220 [2024-11-18 10:42:55.857034] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.220 [2024-11-18 10:42:55.857117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.220 [2024-11-18 10:42:55.857143] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.220 [2024-11-18 10:42:55.857163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.220 [2024-11-18 10:42:55.857197] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:30.220 [2024-11-18 10:42:55.857217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.220 [2024-11-18 10:42:55.898983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.220 BaseBdev1 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.220 [ 00:14:30.220 { 00:14:30.220 "name": "BaseBdev1", 00:14:30.220 "aliases": [ 00:14:30.220 "c041ba6d-0ae4-4fda-8de6-ca4e35211504" 00:14:30.220 ], 00:14:30.220 "product_name": "Malloc disk", 00:14:30.220 "block_size": 512, 00:14:30.220 "num_blocks": 65536, 00:14:30.220 "uuid": "c041ba6d-0ae4-4fda-8de6-ca4e35211504", 00:14:30.220 "assigned_rate_limits": { 00:14:30.220 "rw_ios_per_sec": 0, 00:14:30.220 "rw_mbytes_per_sec": 0, 00:14:30.220 "r_mbytes_per_sec": 0, 00:14:30.220 "w_mbytes_per_sec": 0 00:14:30.220 }, 00:14:30.220 "claimed": true, 00:14:30.220 "claim_type": "exclusive_write", 00:14:30.220 "zoned": false, 00:14:30.220 "supported_io_types": { 00:14:30.220 "read": true, 00:14:30.220 "write": true, 00:14:30.220 "unmap": true, 00:14:30.220 "flush": true, 00:14:30.220 "reset": true, 00:14:30.220 "nvme_admin": false, 00:14:30.220 "nvme_io": false, 00:14:30.220 "nvme_io_md": false, 00:14:30.220 "write_zeroes": true, 00:14:30.220 "zcopy": true, 00:14:30.220 "get_zone_info": false, 00:14:30.220 "zone_management": false, 00:14:30.220 "zone_append": false, 00:14:30.220 "compare": false, 00:14:30.220 "compare_and_write": false, 00:14:30.220 "abort": true, 00:14:30.220 "seek_hole": false, 00:14:30.220 
"seek_data": false, 00:14:30.220 "copy": true, 00:14:30.220 "nvme_iov_md": false 00:14:30.220 }, 00:14:30.220 "memory_domains": [ 00:14:30.220 { 00:14:30.220 "dma_device_id": "system", 00:14:30.220 "dma_device_type": 1 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.220 "dma_device_type": 2 00:14:30.220 } 00:14:30.220 ], 00:14:30.220 "driver_specific": {} 00:14:30.220 } 00:14:30.220 ] 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.220 "name": "Existed_Raid", 00:14:30.220 "uuid": "17a2d45e-52da-4946-a6ef-7fbbf938089e", 00:14:30.220 "strip_size_kb": 64, 00:14:30.220 "state": "configuring", 00:14:30.220 "raid_level": "raid5f", 00:14:30.220 "superblock": true, 00:14:30.220 "num_base_bdevs": 3, 00:14:30.220 "num_base_bdevs_discovered": 1, 00:14:30.220 "num_base_bdevs_operational": 3, 00:14:30.220 "base_bdevs_list": [ 00:14:30.220 { 00:14:30.220 "name": "BaseBdev1", 00:14:30.220 "uuid": "c041ba6d-0ae4-4fda-8de6-ca4e35211504", 00:14:30.220 "is_configured": true, 00:14:30.220 "data_offset": 2048, 00:14:30.220 "data_size": 63488 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "name": "BaseBdev2", 00:14:30.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.220 "is_configured": false, 00:14:30.220 "data_offset": 0, 00:14:30.220 "data_size": 0 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "name": "BaseBdev3", 00:14:30.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.220 "is_configured": false, 00:14:30.220 "data_offset": 0, 00:14:30.220 "data_size": 0 00:14:30.220 } 00:14:30.220 ] 00:14:30.220 }' 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.220 10:42:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.790 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:14:30.790 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.790 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.790 [2024-11-18 10:42:56.406257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.791 [2024-11-18 10:42:56.406356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.791 [2024-11-18 10:42:56.418292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.791 [2024-11-18 10:42:56.420006] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.791 [2024-11-18 10:42:56.420080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.791 [2024-11-18 10:42:56.420107] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:30.791 [2024-11-18 10:42:56.420129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.791 "name": 
"Existed_Raid", 00:14:30.791 "uuid": "3e953203-a971-4438-830e-b6cb5d683c10", 00:14:30.791 "strip_size_kb": 64, 00:14:30.791 "state": "configuring", 00:14:30.791 "raid_level": "raid5f", 00:14:30.791 "superblock": true, 00:14:30.791 "num_base_bdevs": 3, 00:14:30.791 "num_base_bdevs_discovered": 1, 00:14:30.791 "num_base_bdevs_operational": 3, 00:14:30.791 "base_bdevs_list": [ 00:14:30.791 { 00:14:30.791 "name": "BaseBdev1", 00:14:30.791 "uuid": "c041ba6d-0ae4-4fda-8de6-ca4e35211504", 00:14:30.791 "is_configured": true, 00:14:30.791 "data_offset": 2048, 00:14:30.791 "data_size": 63488 00:14:30.791 }, 00:14:30.791 { 00:14:30.791 "name": "BaseBdev2", 00:14:30.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.791 "is_configured": false, 00:14:30.791 "data_offset": 0, 00:14:30.791 "data_size": 0 00:14:30.791 }, 00:14:30.791 { 00:14:30.791 "name": "BaseBdev3", 00:14:30.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.791 "is_configured": false, 00:14:30.791 "data_offset": 0, 00:14:30.791 "data_size": 0 00:14:30.791 } 00:14:30.791 ] 00:14:30.791 }' 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.791 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.051 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:31.051 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.051 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.311 [2024-11-18 10:42:56.936891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.311 BaseBdev2 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.311 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.311 [ 00:14:31.311 { 00:14:31.312 "name": "BaseBdev2", 00:14:31.312 "aliases": [ 00:14:31.312 "98c5d91b-be89-431c-b23e-6d4b70400bc5" 00:14:31.312 ], 00:14:31.312 "product_name": "Malloc disk", 00:14:31.312 "block_size": 512, 00:14:31.312 "num_blocks": 65536, 00:14:31.312 "uuid": "98c5d91b-be89-431c-b23e-6d4b70400bc5", 00:14:31.312 "assigned_rate_limits": { 00:14:31.312 "rw_ios_per_sec": 0, 00:14:31.312 "rw_mbytes_per_sec": 0, 00:14:31.312 "r_mbytes_per_sec": 0, 00:14:31.312 "w_mbytes_per_sec": 0 00:14:31.312 }, 00:14:31.312 "claimed": true, 
00:14:31.312 "claim_type": "exclusive_write", 00:14:31.312 "zoned": false, 00:14:31.312 "supported_io_types": { 00:14:31.312 "read": true, 00:14:31.312 "write": true, 00:14:31.312 "unmap": true, 00:14:31.312 "flush": true, 00:14:31.312 "reset": true, 00:14:31.312 "nvme_admin": false, 00:14:31.312 "nvme_io": false, 00:14:31.312 "nvme_io_md": false, 00:14:31.312 "write_zeroes": true, 00:14:31.312 "zcopy": true, 00:14:31.312 "get_zone_info": false, 00:14:31.312 "zone_management": false, 00:14:31.312 "zone_append": false, 00:14:31.312 "compare": false, 00:14:31.312 "compare_and_write": false, 00:14:31.312 "abort": true, 00:14:31.312 "seek_hole": false, 00:14:31.312 "seek_data": false, 00:14:31.312 "copy": true, 00:14:31.312 "nvme_iov_md": false 00:14:31.312 }, 00:14:31.312 "memory_domains": [ 00:14:31.312 { 00:14:31.312 "dma_device_id": "system", 00:14:31.312 "dma_device_type": 1 00:14:31.312 }, 00:14:31.312 { 00:14:31.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.312 "dma_device_type": 2 00:14:31.312 } 00:14:31.312 ], 00:14:31.312 "driver_specific": {} 00:14:31.312 } 00:14:31.312 ] 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.312 10:42:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.312 10:42:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.312 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.312 "name": "Existed_Raid", 00:14:31.312 "uuid": "3e953203-a971-4438-830e-b6cb5d683c10", 00:14:31.312 "strip_size_kb": 64, 00:14:31.312 "state": "configuring", 00:14:31.312 "raid_level": "raid5f", 00:14:31.312 "superblock": true, 00:14:31.312 "num_base_bdevs": 3, 00:14:31.312 "num_base_bdevs_discovered": 2, 00:14:31.312 "num_base_bdevs_operational": 3, 00:14:31.312 "base_bdevs_list": [ 00:14:31.312 { 00:14:31.312 "name": "BaseBdev1", 00:14:31.312 "uuid": "c041ba6d-0ae4-4fda-8de6-ca4e35211504", 
00:14:31.312 "is_configured": true, 00:14:31.312 "data_offset": 2048, 00:14:31.312 "data_size": 63488 00:14:31.312 }, 00:14:31.312 { 00:14:31.312 "name": "BaseBdev2", 00:14:31.312 "uuid": "98c5d91b-be89-431c-b23e-6d4b70400bc5", 00:14:31.312 "is_configured": true, 00:14:31.312 "data_offset": 2048, 00:14:31.312 "data_size": 63488 00:14:31.312 }, 00:14:31.312 { 00:14:31.312 "name": "BaseBdev3", 00:14:31.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.312 "is_configured": false, 00:14:31.312 "data_offset": 0, 00:14:31.312 "data_size": 0 00:14:31.312 } 00:14:31.312 ] 00:14:31.312 }' 00:14:31.312 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.312 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.882 [2024-11-18 10:42:57.544654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.882 [2024-11-18 10:42:57.544999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:31.882 [2024-11-18 10:42:57.545068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:31.882 [2024-11-18 10:42:57.545369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:31.882 BaseBdev3 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.882 [2024-11-18 10:42:57.550123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:31.882 [2024-11-18 10:42:57.550209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:31.882 [2024-11-18 10:42:57.550433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.882 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.882 [ 00:14:31.882 { 00:14:31.882 "name": "BaseBdev3", 00:14:31.882 "aliases": [ 00:14:31.882 "54941e66-dfa3-4652-a474-237c76e0bf3f" 00:14:31.882 ], 00:14:31.882 "product_name": "Malloc disk", 00:14:31.882 "block_size": 512, 00:14:31.882 
"num_blocks": 65536, 00:14:31.882 "uuid": "54941e66-dfa3-4652-a474-237c76e0bf3f", 00:14:31.882 "assigned_rate_limits": { 00:14:31.882 "rw_ios_per_sec": 0, 00:14:31.882 "rw_mbytes_per_sec": 0, 00:14:31.882 "r_mbytes_per_sec": 0, 00:14:31.882 "w_mbytes_per_sec": 0 00:14:31.882 }, 00:14:31.882 "claimed": true, 00:14:31.882 "claim_type": "exclusive_write", 00:14:31.882 "zoned": false, 00:14:31.882 "supported_io_types": { 00:14:31.882 "read": true, 00:14:31.882 "write": true, 00:14:31.882 "unmap": true, 00:14:31.882 "flush": true, 00:14:31.882 "reset": true, 00:14:31.882 "nvme_admin": false, 00:14:31.882 "nvme_io": false, 00:14:31.882 "nvme_io_md": false, 00:14:31.882 "write_zeroes": true, 00:14:31.882 "zcopy": true, 00:14:31.882 "get_zone_info": false, 00:14:31.882 "zone_management": false, 00:14:31.882 "zone_append": false, 00:14:31.882 "compare": false, 00:14:31.882 "compare_and_write": false, 00:14:31.882 "abort": true, 00:14:31.882 "seek_hole": false, 00:14:31.882 "seek_data": false, 00:14:31.882 "copy": true, 00:14:31.883 "nvme_iov_md": false 00:14:31.883 }, 00:14:31.883 "memory_domains": [ 00:14:31.883 { 00:14:31.883 "dma_device_id": "system", 00:14:31.883 "dma_device_type": 1 00:14:31.883 }, 00:14:31.883 { 00:14:31.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.883 "dma_device_type": 2 00:14:31.883 } 00:14:31.883 ], 00:14:31.883 "driver_specific": {} 00:14:31.883 } 00:14:31.883 ] 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.883 "name": "Existed_Raid", 00:14:31.883 "uuid": "3e953203-a971-4438-830e-b6cb5d683c10", 00:14:31.883 "strip_size_kb": 64, 00:14:31.883 "state": "online", 00:14:31.883 "raid_level": "raid5f", 00:14:31.883 "superblock": true, 
00:14:31.883 "num_base_bdevs": 3, 00:14:31.883 "num_base_bdevs_discovered": 3, 00:14:31.883 "num_base_bdevs_operational": 3, 00:14:31.883 "base_bdevs_list": [ 00:14:31.883 { 00:14:31.883 "name": "BaseBdev1", 00:14:31.883 "uuid": "c041ba6d-0ae4-4fda-8de6-ca4e35211504", 00:14:31.883 "is_configured": true, 00:14:31.883 "data_offset": 2048, 00:14:31.883 "data_size": 63488 00:14:31.883 }, 00:14:31.883 { 00:14:31.883 "name": "BaseBdev2", 00:14:31.883 "uuid": "98c5d91b-be89-431c-b23e-6d4b70400bc5", 00:14:31.883 "is_configured": true, 00:14:31.883 "data_offset": 2048, 00:14:31.883 "data_size": 63488 00:14:31.883 }, 00:14:31.883 { 00:14:31.883 "name": "BaseBdev3", 00:14:31.883 "uuid": "54941e66-dfa3-4652-a474-237c76e0bf3f", 00:14:31.883 "is_configured": true, 00:14:31.883 "data_offset": 2048, 00:14:31.883 "data_size": 63488 00:14:31.883 } 00:14:31.883 ] 00:14:31.883 }' 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.883 10:42:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.143 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:32.143 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:32.143 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:32.143 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:32.143 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:32.143 10:42:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:32.143 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:32.143 10:42:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:32.143 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.143 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.143 [2024-11-18 10:42:58.011601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:32.403 "name": "Existed_Raid", 00:14:32.403 "aliases": [ 00:14:32.403 "3e953203-a971-4438-830e-b6cb5d683c10" 00:14:32.403 ], 00:14:32.403 "product_name": "Raid Volume", 00:14:32.403 "block_size": 512, 00:14:32.403 "num_blocks": 126976, 00:14:32.403 "uuid": "3e953203-a971-4438-830e-b6cb5d683c10", 00:14:32.403 "assigned_rate_limits": { 00:14:32.403 "rw_ios_per_sec": 0, 00:14:32.403 "rw_mbytes_per_sec": 0, 00:14:32.403 "r_mbytes_per_sec": 0, 00:14:32.403 "w_mbytes_per_sec": 0 00:14:32.403 }, 00:14:32.403 "claimed": false, 00:14:32.403 "zoned": false, 00:14:32.403 "supported_io_types": { 00:14:32.403 "read": true, 00:14:32.403 "write": true, 00:14:32.403 "unmap": false, 00:14:32.403 "flush": false, 00:14:32.403 "reset": true, 00:14:32.403 "nvme_admin": false, 00:14:32.403 "nvme_io": false, 00:14:32.403 "nvme_io_md": false, 00:14:32.403 "write_zeroes": true, 00:14:32.403 "zcopy": false, 00:14:32.403 "get_zone_info": false, 00:14:32.403 "zone_management": false, 00:14:32.403 "zone_append": false, 00:14:32.403 "compare": false, 00:14:32.403 "compare_and_write": false, 00:14:32.403 "abort": false, 00:14:32.403 "seek_hole": false, 00:14:32.403 "seek_data": false, 00:14:32.403 "copy": false, 00:14:32.403 "nvme_iov_md": false 00:14:32.403 }, 00:14:32.403 "driver_specific": { 00:14:32.403 "raid": { 00:14:32.403 "uuid": "3e953203-a971-4438-830e-b6cb5d683c10", 00:14:32.403 
"strip_size_kb": 64, 00:14:32.403 "state": "online", 00:14:32.403 "raid_level": "raid5f", 00:14:32.403 "superblock": true, 00:14:32.403 "num_base_bdevs": 3, 00:14:32.403 "num_base_bdevs_discovered": 3, 00:14:32.403 "num_base_bdevs_operational": 3, 00:14:32.403 "base_bdevs_list": [ 00:14:32.403 { 00:14:32.403 "name": "BaseBdev1", 00:14:32.403 "uuid": "c041ba6d-0ae4-4fda-8de6-ca4e35211504", 00:14:32.403 "is_configured": true, 00:14:32.403 "data_offset": 2048, 00:14:32.403 "data_size": 63488 00:14:32.403 }, 00:14:32.403 { 00:14:32.403 "name": "BaseBdev2", 00:14:32.403 "uuid": "98c5d91b-be89-431c-b23e-6d4b70400bc5", 00:14:32.403 "is_configured": true, 00:14:32.403 "data_offset": 2048, 00:14:32.403 "data_size": 63488 00:14:32.403 }, 00:14:32.403 { 00:14:32.403 "name": "BaseBdev3", 00:14:32.403 "uuid": "54941e66-dfa3-4652-a474-237c76e0bf3f", 00:14:32.403 "is_configured": true, 00:14:32.403 "data_offset": 2048, 00:14:32.403 "data_size": 63488 00:14:32.403 } 00:14:32.403 ] 00:14:32.403 } 00:14:32.403 } 00:14:32.403 }' 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:32.403 BaseBdev2 00:14:32.403 BaseBdev3' 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.403 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.404 10:42:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.404 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.404 [2024-11-18 10:42:58.263182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.664 
10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.664 "name": "Existed_Raid", 00:14:32.664 "uuid": "3e953203-a971-4438-830e-b6cb5d683c10", 00:14:32.664 "strip_size_kb": 64, 00:14:32.664 "state": "online", 00:14:32.664 "raid_level": "raid5f", 00:14:32.664 "superblock": true, 00:14:32.664 "num_base_bdevs": 3, 00:14:32.664 "num_base_bdevs_discovered": 2, 00:14:32.664 "num_base_bdevs_operational": 2, 00:14:32.664 
"base_bdevs_list": [ 00:14:32.664 { 00:14:32.664 "name": null, 00:14:32.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.664 "is_configured": false, 00:14:32.664 "data_offset": 0, 00:14:32.664 "data_size": 63488 00:14:32.664 }, 00:14:32.664 { 00:14:32.664 "name": "BaseBdev2", 00:14:32.664 "uuid": "98c5d91b-be89-431c-b23e-6d4b70400bc5", 00:14:32.664 "is_configured": true, 00:14:32.664 "data_offset": 2048, 00:14:32.664 "data_size": 63488 00:14:32.664 }, 00:14:32.664 { 00:14:32.664 "name": "BaseBdev3", 00:14:32.664 "uuid": "54941e66-dfa3-4652-a474-237c76e0bf3f", 00:14:32.664 "is_configured": true, 00:14:32.664 "data_offset": 2048, 00:14:32.664 "data_size": 63488 00:14:32.664 } 00:14:32.664 ] 00:14:32.664 }' 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.664 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.234 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:33.234 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:33.234 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.234 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:33.235 10:42:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.235 [2024-11-18 10:42:58.882821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:33.235 [2024-11-18 10:42:58.883030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.235 [2024-11-18 10:42:58.971047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.235 10:42:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.235 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:33.235 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:33.235 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:33.235 10:42:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.235 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.235 [2024-11-18 10:42:59.030939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:33.235 [2024-11-18 10:42:59.031031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:33.496 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.496 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:33.496 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:33.496 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.497 BaseBdev2 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.497 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.497 [ 00:14:33.497 { 00:14:33.497 "name": "BaseBdev2", 
00:14:33.497 "aliases": [ 00:14:33.497 "280e9d12-a21f-4066-a574-f00553994eef" 00:14:33.497 ], 00:14:33.497 "product_name": "Malloc disk", 00:14:33.497 "block_size": 512, 00:14:33.497 "num_blocks": 65536, 00:14:33.497 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:33.497 "assigned_rate_limits": { 00:14:33.497 "rw_ios_per_sec": 0, 00:14:33.497 "rw_mbytes_per_sec": 0, 00:14:33.497 "r_mbytes_per_sec": 0, 00:14:33.497 "w_mbytes_per_sec": 0 00:14:33.497 }, 00:14:33.497 "claimed": false, 00:14:33.497 "zoned": false, 00:14:33.497 "supported_io_types": { 00:14:33.497 "read": true, 00:14:33.497 "write": true, 00:14:33.498 "unmap": true, 00:14:33.498 "flush": true, 00:14:33.498 "reset": true, 00:14:33.498 "nvme_admin": false, 00:14:33.498 "nvme_io": false, 00:14:33.498 "nvme_io_md": false, 00:14:33.498 "write_zeroes": true, 00:14:33.498 "zcopy": true, 00:14:33.498 "get_zone_info": false, 00:14:33.498 "zone_management": false, 00:14:33.498 "zone_append": false, 00:14:33.498 "compare": false, 00:14:33.498 "compare_and_write": false, 00:14:33.498 "abort": true, 00:14:33.498 "seek_hole": false, 00:14:33.498 "seek_data": false, 00:14:33.498 "copy": true, 00:14:33.498 "nvme_iov_md": false 00:14:33.498 }, 00:14:33.498 "memory_domains": [ 00:14:33.498 { 00:14:33.498 "dma_device_id": "system", 00:14:33.498 "dma_device_type": 1 00:14:33.498 }, 00:14:33.498 { 00:14:33.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.498 "dma_device_type": 2 00:14:33.498 } 00:14:33.498 ], 00:14:33.498 "driver_specific": {} 00:14:33.498 } 00:14:33.498 ] 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.498 BaseBdev3 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.498 10:42:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.498 [ 00:14:33.498 { 00:14:33.498 "name": "BaseBdev3", 00:14:33.498 "aliases": [ 00:14:33.498 "f0a62002-2c58-49b0-b093-50fd852a5a78" 00:14:33.498 ], 00:14:33.498 "product_name": "Malloc disk", 00:14:33.498 "block_size": 512, 00:14:33.498 "num_blocks": 65536, 00:14:33.498 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:33.498 "assigned_rate_limits": { 00:14:33.498 "rw_ios_per_sec": 0, 00:14:33.498 "rw_mbytes_per_sec": 0, 00:14:33.498 "r_mbytes_per_sec": 0, 00:14:33.498 "w_mbytes_per_sec": 0 00:14:33.498 }, 00:14:33.499 "claimed": false, 00:14:33.499 "zoned": false, 00:14:33.499 "supported_io_types": { 00:14:33.499 "read": true, 00:14:33.499 "write": true, 00:14:33.499 "unmap": true, 00:14:33.499 "flush": true, 00:14:33.499 "reset": true, 00:14:33.499 "nvme_admin": false, 00:14:33.499 "nvme_io": false, 00:14:33.499 "nvme_io_md": false, 00:14:33.499 "write_zeroes": true, 00:14:33.499 "zcopy": true, 00:14:33.499 "get_zone_info": false, 00:14:33.499 "zone_management": false, 00:14:33.499 "zone_append": false, 00:14:33.499 "compare": false, 00:14:33.499 "compare_and_write": false, 00:14:33.499 "abort": true, 00:14:33.499 "seek_hole": false, 00:14:33.499 "seek_data": false, 00:14:33.499 "copy": true, 00:14:33.499 "nvme_iov_md": false 00:14:33.499 }, 00:14:33.499 "memory_domains": [ 00:14:33.499 { 00:14:33.499 "dma_device_id": "system", 00:14:33.499 "dma_device_type": 1 00:14:33.499 }, 00:14:33.499 { 00:14:33.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.499 "dma_device_type": 2 00:14:33.499 } 00:14:33.499 ], 00:14:33.499 "driver_specific": {} 00:14:33.499 } 00:14:33.499 ] 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:33.499 
10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.499 [2024-11-18 10:42:59.332318] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:33.499 [2024-11-18 10:42:59.332453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:33.499 [2024-11-18 10:42:59.332493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.499 [2024-11-18 10:42:59.334161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.499 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.500 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.760 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.760 "name": "Existed_Raid", 00:14:33.760 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:33.760 "strip_size_kb": 64, 00:14:33.760 "state": "configuring", 00:14:33.760 "raid_level": "raid5f", 00:14:33.760 "superblock": true, 00:14:33.760 "num_base_bdevs": 3, 00:14:33.760 "num_base_bdevs_discovered": 2, 00:14:33.760 "num_base_bdevs_operational": 3, 00:14:33.760 "base_bdevs_list": [ 00:14:33.760 { 00:14:33.760 "name": "BaseBdev1", 00:14:33.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.760 "is_configured": false, 00:14:33.760 "data_offset": 0, 00:14:33.760 "data_size": 0 00:14:33.760 }, 00:14:33.760 { 00:14:33.760 "name": "BaseBdev2", 00:14:33.760 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:33.760 "is_configured": true, 00:14:33.760 "data_offset": 2048, 00:14:33.760 "data_size": 63488 00:14:33.760 }, 00:14:33.760 { 00:14:33.760 "name": "BaseBdev3", 00:14:33.760 "uuid": 
"f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:33.760 "is_configured": true, 00:14:33.760 "data_offset": 2048, 00:14:33.760 "data_size": 63488 00:14:33.760 } 00:14:33.760 ] 00:14:33.760 }' 00:14:33.760 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.760 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.020 [2024-11-18 10:42:59.819466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.020 10:42:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.020 "name": "Existed_Raid", 00:14:34.020 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:34.020 "strip_size_kb": 64, 00:14:34.020 "state": "configuring", 00:14:34.020 "raid_level": "raid5f", 00:14:34.020 "superblock": true, 00:14:34.020 "num_base_bdevs": 3, 00:14:34.020 "num_base_bdevs_discovered": 1, 00:14:34.020 "num_base_bdevs_operational": 3, 00:14:34.020 "base_bdevs_list": [ 00:14:34.020 { 00:14:34.020 "name": "BaseBdev1", 00:14:34.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.020 "is_configured": false, 00:14:34.020 "data_offset": 0, 00:14:34.020 "data_size": 0 00:14:34.020 }, 00:14:34.020 { 00:14:34.020 "name": null, 00:14:34.020 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:34.020 "is_configured": false, 00:14:34.020 "data_offset": 0, 00:14:34.020 "data_size": 63488 00:14:34.020 }, 00:14:34.020 { 00:14:34.020 "name": "BaseBdev3", 00:14:34.020 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:34.020 "is_configured": true, 00:14:34.020 "data_offset": 2048, 00:14:34.020 "data_size": 63488 00:14:34.020 } 00:14:34.020 ] 
00:14:34.020 }' 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.020 10:42:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.590 [2024-11-18 10:43:00.369698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.590 BaseBdev1 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.590 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.590 [ 00:14:34.590 { 00:14:34.590 "name": "BaseBdev1", 00:14:34.590 "aliases": [ 00:14:34.590 "16674450-b375-4700-b641-46665ed9edff" 00:14:34.590 ], 00:14:34.590 "product_name": "Malloc disk", 00:14:34.590 "block_size": 512, 00:14:34.590 "num_blocks": 65536, 00:14:34.590 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:34.590 "assigned_rate_limits": { 00:14:34.590 "rw_ios_per_sec": 0, 00:14:34.590 "rw_mbytes_per_sec": 0, 00:14:34.590 "r_mbytes_per_sec": 0, 00:14:34.590 "w_mbytes_per_sec": 0 00:14:34.590 }, 00:14:34.590 "claimed": true, 00:14:34.590 "claim_type": "exclusive_write", 00:14:34.590 "zoned": false, 00:14:34.590 "supported_io_types": { 00:14:34.590 "read": true, 00:14:34.590 "write": true, 00:14:34.590 "unmap": true, 00:14:34.590 "flush": true, 00:14:34.590 "reset": true, 00:14:34.590 "nvme_admin": false, 00:14:34.590 "nvme_io": false, 00:14:34.591 
"nvme_io_md": false, 00:14:34.591 "write_zeroes": true, 00:14:34.591 "zcopy": true, 00:14:34.591 "get_zone_info": false, 00:14:34.591 "zone_management": false, 00:14:34.591 "zone_append": false, 00:14:34.591 "compare": false, 00:14:34.591 "compare_and_write": false, 00:14:34.591 "abort": true, 00:14:34.591 "seek_hole": false, 00:14:34.591 "seek_data": false, 00:14:34.591 "copy": true, 00:14:34.591 "nvme_iov_md": false 00:14:34.591 }, 00:14:34.591 "memory_domains": [ 00:14:34.591 { 00:14:34.591 "dma_device_id": "system", 00:14:34.591 "dma_device_type": 1 00:14:34.591 }, 00:14:34.591 { 00:14:34.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.591 "dma_device_type": 2 00:14:34.591 } 00:14:34.591 ], 00:14:34.591 "driver_specific": {} 00:14:34.591 } 00:14:34.591 ] 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.591 
10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.591 "name": "Existed_Raid", 00:14:34.591 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:34.591 "strip_size_kb": 64, 00:14:34.591 "state": "configuring", 00:14:34.591 "raid_level": "raid5f", 00:14:34.591 "superblock": true, 00:14:34.591 "num_base_bdevs": 3, 00:14:34.591 "num_base_bdevs_discovered": 2, 00:14:34.591 "num_base_bdevs_operational": 3, 00:14:34.591 "base_bdevs_list": [ 00:14:34.591 { 00:14:34.591 "name": "BaseBdev1", 00:14:34.591 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:34.591 "is_configured": true, 00:14:34.591 "data_offset": 2048, 00:14:34.591 "data_size": 63488 00:14:34.591 }, 00:14:34.591 { 00:14:34.591 "name": null, 00:14:34.591 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:34.591 "is_configured": false, 00:14:34.591 "data_offset": 0, 00:14:34.591 "data_size": 63488 00:14:34.591 }, 00:14:34.591 { 00:14:34.591 "name": "BaseBdev3", 00:14:34.591 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:34.591 "is_configured": true, 00:14:34.591 "data_offset": 2048, 00:14:34.591 "data_size": 63488 00:14:34.591 } 
00:14:34.591 ] 00:14:34.591 }' 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.591 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.161 [2024-11-18 10:43:00.884815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.161 "name": "Existed_Raid", 00:14:35.161 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:35.161 "strip_size_kb": 64, 00:14:35.161 "state": "configuring", 00:14:35.161 "raid_level": "raid5f", 00:14:35.161 "superblock": true, 00:14:35.161 "num_base_bdevs": 3, 00:14:35.161 "num_base_bdevs_discovered": 1, 00:14:35.161 "num_base_bdevs_operational": 3, 00:14:35.161 "base_bdevs_list": [ 00:14:35.161 { 00:14:35.161 "name": "BaseBdev1", 00:14:35.161 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:35.161 "is_configured": true, 
00:14:35.161 "data_offset": 2048, 00:14:35.161 "data_size": 63488 00:14:35.161 }, 00:14:35.161 { 00:14:35.161 "name": null, 00:14:35.161 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:35.161 "is_configured": false, 00:14:35.161 "data_offset": 0, 00:14:35.161 "data_size": 63488 00:14:35.161 }, 00:14:35.161 { 00:14:35.161 "name": null, 00:14:35.161 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:35.161 "is_configured": false, 00:14:35.161 "data_offset": 0, 00:14:35.161 "data_size": 63488 00:14:35.161 } 00:14:35.161 ] 00:14:35.161 }' 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.161 10:43:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.731 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.731 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.731 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.731 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:35.731 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.731 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:35.731 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:35.731 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.732 [2024-11-18 10:43:01.400106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.732 10:43:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:35.732 "name": "Existed_Raid", 00:14:35.732 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:35.732 "strip_size_kb": 64, 00:14:35.732 "state": "configuring", 00:14:35.732 "raid_level": "raid5f", 00:14:35.732 "superblock": true, 00:14:35.732 "num_base_bdevs": 3, 00:14:35.732 "num_base_bdevs_discovered": 2, 00:14:35.732 "num_base_bdevs_operational": 3, 00:14:35.732 "base_bdevs_list": [ 00:14:35.732 { 00:14:35.732 "name": "BaseBdev1", 00:14:35.732 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:35.732 "is_configured": true, 00:14:35.732 "data_offset": 2048, 00:14:35.732 "data_size": 63488 00:14:35.732 }, 00:14:35.732 { 00:14:35.732 "name": null, 00:14:35.732 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:35.732 "is_configured": false, 00:14:35.732 "data_offset": 0, 00:14:35.732 "data_size": 63488 00:14:35.732 }, 00:14:35.732 { 00:14:35.732 "name": "BaseBdev3", 00:14:35.732 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:35.732 "is_configured": true, 00:14:35.732 "data_offset": 2048, 00:14:35.732 "data_size": 63488 00:14:35.732 } 00:14:35.732 ] 00:14:35.732 }' 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.732 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.992 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.992 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:35.992 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.992 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.992 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.251 10:43:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:36.251 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:36.251 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.251 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.251 [2024-11-18 10:43:01.887314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.251 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.251 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.251 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.252 10:43:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.252 10:43:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.252 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.252 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.252 "name": "Existed_Raid", 00:14:36.252 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:36.252 "strip_size_kb": 64, 00:14:36.252 "state": "configuring", 00:14:36.252 "raid_level": "raid5f", 00:14:36.252 "superblock": true, 00:14:36.252 "num_base_bdevs": 3, 00:14:36.252 "num_base_bdevs_discovered": 1, 00:14:36.252 "num_base_bdevs_operational": 3, 00:14:36.252 "base_bdevs_list": [ 00:14:36.252 { 00:14:36.252 "name": null, 00:14:36.252 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:36.252 "is_configured": false, 00:14:36.252 "data_offset": 0, 00:14:36.252 "data_size": 63488 00:14:36.252 }, 00:14:36.252 { 00:14:36.252 "name": null, 00:14:36.252 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:36.252 "is_configured": false, 00:14:36.252 "data_offset": 0, 00:14:36.252 "data_size": 63488 00:14:36.252 }, 00:14:36.252 { 00:14:36.252 "name": "BaseBdev3", 00:14:36.252 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:36.252 "is_configured": true, 00:14:36.252 "data_offset": 2048, 00:14:36.252 "data_size": 63488 00:14:36.252 } 00:14:36.252 ] 00:14:36.252 }' 00:14:36.252 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.252 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.822 [2024-11-18 10:43:02.491166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.822 10:43:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.822 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.823 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.823 "name": "Existed_Raid", 00:14:36.823 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:36.823 "strip_size_kb": 64, 00:14:36.823 "state": "configuring", 00:14:36.823 "raid_level": "raid5f", 00:14:36.823 "superblock": true, 00:14:36.823 "num_base_bdevs": 3, 00:14:36.823 "num_base_bdevs_discovered": 2, 00:14:36.823 "num_base_bdevs_operational": 3, 00:14:36.823 "base_bdevs_list": [ 00:14:36.823 { 00:14:36.823 "name": null, 00:14:36.823 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:36.823 "is_configured": false, 00:14:36.823 "data_offset": 0, 00:14:36.823 "data_size": 63488 00:14:36.823 }, 00:14:36.823 { 00:14:36.823 "name": "BaseBdev2", 00:14:36.823 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:36.823 "is_configured": true, 00:14:36.823 "data_offset": 2048, 00:14:36.823 "data_size": 63488 00:14:36.823 }, 00:14:36.823 { 
00:14:36.823 "name": "BaseBdev3", 00:14:36.823 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:36.823 "is_configured": true, 00:14:36.823 "data_offset": 2048, 00:14:36.823 "data_size": 63488 00:14:36.823 } 00:14:36.823 ] 00:14:36.823 }' 00:14:36.823 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.823 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.082 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.082 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:37.082 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.082 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.082 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.343 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:37.343 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.343 10:43:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:37.343 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.343 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.343 10:43:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 16674450-b375-4700-b641-46665ed9edff 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.343 [2024-11-18 10:43:03.060669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:37.343 [2024-11-18 10:43:03.060936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:37.343 [2024-11-18 10:43:03.060975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.343 [2024-11-18 10:43:03.061247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:37.343 NewBaseBdev 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.343 [2024-11-18 10:43:03.066469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:37.343 
[2024-11-18 10:43:03.066530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:37.343 [2024-11-18 10:43:03.066697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.343 [ 00:14:37.343 { 00:14:37.343 "name": "NewBaseBdev", 00:14:37.343 "aliases": [ 00:14:37.343 "16674450-b375-4700-b641-46665ed9edff" 00:14:37.343 ], 00:14:37.343 "product_name": "Malloc disk", 00:14:37.343 "block_size": 512, 00:14:37.343 "num_blocks": 65536, 00:14:37.343 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:37.343 "assigned_rate_limits": { 00:14:37.343 "rw_ios_per_sec": 0, 00:14:37.343 "rw_mbytes_per_sec": 0, 00:14:37.343 "r_mbytes_per_sec": 0, 00:14:37.343 "w_mbytes_per_sec": 0 00:14:37.343 }, 00:14:37.343 "claimed": true, 00:14:37.343 "claim_type": "exclusive_write", 00:14:37.343 "zoned": false, 00:14:37.343 "supported_io_types": { 00:14:37.343 "read": true, 00:14:37.343 "write": true, 00:14:37.343 "unmap": true, 00:14:37.343 "flush": true, 00:14:37.343 "reset": true, 00:14:37.343 "nvme_admin": false, 00:14:37.343 "nvme_io": false, 00:14:37.343 "nvme_io_md": false, 00:14:37.343 "write_zeroes": true, 00:14:37.343 "zcopy": true, 00:14:37.343 "get_zone_info": false, 00:14:37.343 "zone_management": false, 00:14:37.343 "zone_append": false, 00:14:37.343 "compare": false, 00:14:37.343 "compare_and_write": false, 00:14:37.343 "abort": true, 00:14:37.343 "seek_hole": false, 00:14:37.343 "seek_data": false, 
00:14:37.343 "copy": true, 00:14:37.343 "nvme_iov_md": false 00:14:37.343 }, 00:14:37.343 "memory_domains": [ 00:14:37.343 { 00:14:37.343 "dma_device_id": "system", 00:14:37.343 "dma_device_type": 1 00:14:37.343 }, 00:14:37.343 { 00:14:37.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.343 "dma_device_type": 2 00:14:37.343 } 00:14:37.343 ], 00:14:37.343 "driver_specific": {} 00:14:37.343 } 00:14:37.343 ] 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.343 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.344 10:43:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.344 "name": "Existed_Raid", 00:14:37.344 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:37.344 "strip_size_kb": 64, 00:14:37.344 "state": "online", 00:14:37.344 "raid_level": "raid5f", 00:14:37.344 "superblock": true, 00:14:37.344 "num_base_bdevs": 3, 00:14:37.344 "num_base_bdevs_discovered": 3, 00:14:37.344 "num_base_bdevs_operational": 3, 00:14:37.344 "base_bdevs_list": [ 00:14:37.344 { 00:14:37.344 "name": "NewBaseBdev", 00:14:37.344 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:37.344 "is_configured": true, 00:14:37.344 "data_offset": 2048, 00:14:37.344 "data_size": 63488 00:14:37.344 }, 00:14:37.344 { 00:14:37.344 "name": "BaseBdev2", 00:14:37.344 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:37.344 "is_configured": true, 00:14:37.344 "data_offset": 2048, 00:14:37.344 "data_size": 63488 00:14:37.344 }, 00:14:37.344 { 00:14:37.344 "name": "BaseBdev3", 00:14:37.344 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:37.344 "is_configured": true, 00:14:37.344 "data_offset": 2048, 00:14:37.344 "data_size": 63488 00:14:37.344 } 00:14:37.344 ] 00:14:37.344 }' 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.344 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.913 [2024-11-18 10:43:03.547934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.913 "name": "Existed_Raid", 00:14:37.913 "aliases": [ 00:14:37.913 "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b" 00:14:37.913 ], 00:14:37.913 "product_name": "Raid Volume", 00:14:37.913 "block_size": 512, 00:14:37.913 "num_blocks": 126976, 00:14:37.913 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:37.913 "assigned_rate_limits": { 00:14:37.913 "rw_ios_per_sec": 0, 00:14:37.913 "rw_mbytes_per_sec": 0, 00:14:37.913 "r_mbytes_per_sec": 0, 00:14:37.913 "w_mbytes_per_sec": 0 00:14:37.913 }, 00:14:37.913 "claimed": false, 00:14:37.913 "zoned": false, 00:14:37.913 
"supported_io_types": { 00:14:37.913 "read": true, 00:14:37.913 "write": true, 00:14:37.913 "unmap": false, 00:14:37.913 "flush": false, 00:14:37.913 "reset": true, 00:14:37.913 "nvme_admin": false, 00:14:37.913 "nvme_io": false, 00:14:37.913 "nvme_io_md": false, 00:14:37.913 "write_zeroes": true, 00:14:37.913 "zcopy": false, 00:14:37.913 "get_zone_info": false, 00:14:37.913 "zone_management": false, 00:14:37.913 "zone_append": false, 00:14:37.913 "compare": false, 00:14:37.913 "compare_and_write": false, 00:14:37.913 "abort": false, 00:14:37.913 "seek_hole": false, 00:14:37.913 "seek_data": false, 00:14:37.913 "copy": false, 00:14:37.913 "nvme_iov_md": false 00:14:37.913 }, 00:14:37.913 "driver_specific": { 00:14:37.913 "raid": { 00:14:37.913 "uuid": "8a2bd032-3ad5-46a2-87e3-51af57ba0e2b", 00:14:37.913 "strip_size_kb": 64, 00:14:37.913 "state": "online", 00:14:37.913 "raid_level": "raid5f", 00:14:37.913 "superblock": true, 00:14:37.913 "num_base_bdevs": 3, 00:14:37.913 "num_base_bdevs_discovered": 3, 00:14:37.913 "num_base_bdevs_operational": 3, 00:14:37.913 "base_bdevs_list": [ 00:14:37.913 { 00:14:37.913 "name": "NewBaseBdev", 00:14:37.913 "uuid": "16674450-b375-4700-b641-46665ed9edff", 00:14:37.913 "is_configured": true, 00:14:37.913 "data_offset": 2048, 00:14:37.913 "data_size": 63488 00:14:37.913 }, 00:14:37.913 { 00:14:37.913 "name": "BaseBdev2", 00:14:37.913 "uuid": "280e9d12-a21f-4066-a574-f00553994eef", 00:14:37.913 "is_configured": true, 00:14:37.913 "data_offset": 2048, 00:14:37.913 "data_size": 63488 00:14:37.913 }, 00:14:37.913 { 00:14:37.913 "name": "BaseBdev3", 00:14:37.913 "uuid": "f0a62002-2c58-49b0-b093-50fd852a5a78", 00:14:37.913 "is_configured": true, 00:14:37.913 "data_offset": 2048, 00:14:37.913 "data_size": 63488 00:14:37.913 } 00:14:37.913 ] 00:14:37.913 } 00:14:37.913 } 00:14:37.913 }' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:37.913 BaseBdev2 00:14:37.913 BaseBdev3' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.913 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.174 [2024-11-18 10:43:03.823310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.174 [2024-11-18 10:43:03.823331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:14:38.174 [2024-11-18 10:43:03.823384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.174 [2024-11-18 10:43:03.823627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.174 [2024-11-18 10:43:03.823639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80302 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80302 ']' 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80302 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80302 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.174 killing process with pid 80302 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80302' 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80302 00:14:38.174 [2024-11-18 10:43:03.873702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.174 10:43:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80302 00:14:38.434 [2024-11-18 10:43:04.154632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.374 10:43:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:39.375 00:14:39.375 real 0m10.695s 00:14:39.375 user 0m17.080s 00:14:39.375 sys 0m2.007s 00:14:39.375 ************************************ 00:14:39.375 END TEST raid5f_state_function_test_sb 00:14:39.375 ************************************ 00:14:39.375 10:43:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.375 10:43:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.375 10:43:05 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:39.375 10:43:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:39.375 10:43:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.375 10:43:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.635 ************************************ 00:14:39.635 START TEST raid5f_superblock_test 00:14:39.635 ************************************ 00:14:39.635 10:43:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:39.635 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:39.635 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:39.635 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80928 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80928 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80928 ']' 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:39.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.636 10:43:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.636 [2024-11-18 10:43:05.354103] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:39.636 [2024-11-18 10:43:05.354228] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80928 ] 00:14:39.896 [2024-11-18 10:43:05.527467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.896 [2024-11-18 10:43:05.630137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.155 [2024-11-18 10:43:05.815107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.155 [2024-11-18 10:43:05.815159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:40.416 10:43:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.416 malloc1 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.416 [2024-11-18 10:43:06.226393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:40.416 [2024-11-18 10:43:06.226540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.416 [2024-11-18 10:43:06.226583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:40.416 [2024-11-18 10:43:06.226612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.416 [2024-11-18 10:43:06.228654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.416 [2024-11-18 10:43:06.228727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:40.416 pt1 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.416 malloc2 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.416 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.416 [2024-11-18 10:43:06.282585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.416 [2024-11-18 10:43:06.282698] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.416 [2024-11-18 10:43:06.282736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:40.416 [2024-11-18 10:43:06.282763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.416 [2024-11-18 10:43:06.284654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.416 [2024-11-18 10:43:06.284725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.417 pt2 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.417 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.677 malloc3 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.677 [2024-11-18 10:43:06.350116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.677 [2024-11-18 10:43:06.350237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.677 [2024-11-18 10:43:06.350274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:40.677 [2024-11-18 10:43:06.350302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.677 [2024-11-18 10:43:06.352190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.677 [2024-11-18 10:43:06.352256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.677 pt3 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.677 [2024-11-18 10:43:06.362152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:40.677 [2024-11-18 
10:43:06.363834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.677 [2024-11-18 10:43:06.363932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.677 [2024-11-18 10:43:06.364104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:40.677 [2024-11-18 10:43:06.364161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:40.677 [2024-11-18 10:43:06.364410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:40.677 [2024-11-18 10:43:06.369261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:40.677 [2024-11-18 10:43:06.369313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:40.677 [2024-11-18 10:43:06.369491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.677 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.677 "name": "raid_bdev1", 00:14:40.677 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:40.677 "strip_size_kb": 64, 00:14:40.677 "state": "online", 00:14:40.677 "raid_level": "raid5f", 00:14:40.677 "superblock": true, 00:14:40.677 "num_base_bdevs": 3, 00:14:40.677 "num_base_bdevs_discovered": 3, 00:14:40.677 "num_base_bdevs_operational": 3, 00:14:40.677 "base_bdevs_list": [ 00:14:40.677 { 00:14:40.677 "name": "pt1", 00:14:40.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.677 "is_configured": true, 00:14:40.677 "data_offset": 2048, 00:14:40.677 "data_size": 63488 00:14:40.677 }, 00:14:40.677 { 00:14:40.677 "name": "pt2", 00:14:40.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.677 "is_configured": true, 00:14:40.677 "data_offset": 2048, 00:14:40.677 "data_size": 63488 00:14:40.678 }, 00:14:40.678 { 00:14:40.678 "name": "pt3", 00:14:40.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.678 "is_configured": true, 00:14:40.678 "data_offset": 2048, 00:14:40.678 "data_size": 63488 00:14:40.678 } 00:14:40.678 ] 00:14:40.678 }' 00:14:40.678 10:43:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.678 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.938 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.938 [2024-11-18 10:43:06.811333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.198 "name": "raid_bdev1", 00:14:41.198 "aliases": [ 00:14:41.198 "89784b80-0efd-4fe1-b4f3-4128858142b9" 00:14:41.198 ], 00:14:41.198 "product_name": "Raid Volume", 00:14:41.198 "block_size": 512, 00:14:41.198 "num_blocks": 126976, 00:14:41.198 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:41.198 "assigned_rate_limits": { 00:14:41.198 "rw_ios_per_sec": 0, 00:14:41.198 
"rw_mbytes_per_sec": 0, 00:14:41.198 "r_mbytes_per_sec": 0, 00:14:41.198 "w_mbytes_per_sec": 0 00:14:41.198 }, 00:14:41.198 "claimed": false, 00:14:41.198 "zoned": false, 00:14:41.198 "supported_io_types": { 00:14:41.198 "read": true, 00:14:41.198 "write": true, 00:14:41.198 "unmap": false, 00:14:41.198 "flush": false, 00:14:41.198 "reset": true, 00:14:41.198 "nvme_admin": false, 00:14:41.198 "nvme_io": false, 00:14:41.198 "nvme_io_md": false, 00:14:41.198 "write_zeroes": true, 00:14:41.198 "zcopy": false, 00:14:41.198 "get_zone_info": false, 00:14:41.198 "zone_management": false, 00:14:41.198 "zone_append": false, 00:14:41.198 "compare": false, 00:14:41.198 "compare_and_write": false, 00:14:41.198 "abort": false, 00:14:41.198 "seek_hole": false, 00:14:41.198 "seek_data": false, 00:14:41.198 "copy": false, 00:14:41.198 "nvme_iov_md": false 00:14:41.198 }, 00:14:41.198 "driver_specific": { 00:14:41.198 "raid": { 00:14:41.198 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:41.198 "strip_size_kb": 64, 00:14:41.198 "state": "online", 00:14:41.198 "raid_level": "raid5f", 00:14:41.198 "superblock": true, 00:14:41.198 "num_base_bdevs": 3, 00:14:41.198 "num_base_bdevs_discovered": 3, 00:14:41.198 "num_base_bdevs_operational": 3, 00:14:41.198 "base_bdevs_list": [ 00:14:41.198 { 00:14:41.198 "name": "pt1", 00:14:41.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.198 "is_configured": true, 00:14:41.198 "data_offset": 2048, 00:14:41.198 "data_size": 63488 00:14:41.198 }, 00:14:41.198 { 00:14:41.198 "name": "pt2", 00:14:41.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.198 "is_configured": true, 00:14:41.198 "data_offset": 2048, 00:14:41.198 "data_size": 63488 00:14:41.198 }, 00:14:41.198 { 00:14:41.198 "name": "pt3", 00:14:41.198 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.198 "is_configured": true, 00:14:41.198 "data_offset": 2048, 00:14:41.198 "data_size": 63488 00:14:41.198 } 00:14:41.198 ] 00:14:41.198 } 00:14:41.198 } 
00:14:41.198 }' 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:41.198 pt2 00:14:41.198 pt3' 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.198 10:43:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.198 10:43:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.198 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:41.459 [2024-11-18 10:43:07.083305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=89784b80-0efd-4fe1-b4f3-4128858142b9 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 89784b80-0efd-4fe1-b4f3-4128858142b9 ']' 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 [2024-11-18 10:43:07.131145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.459 [2024-11-18 10:43:07.131223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.459 [2024-11-18 10:43:07.131281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.459 [2024-11-18 10:43:07.131334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.459 [2024-11-18 10:43:07.131343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.459 10:43:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 [2024-11-18 10:43:07.287080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:41.459 [2024-11-18 
10:43:07.288662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:41.459 [2024-11-18 10:43:07.288708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:41.459 [2024-11-18 10:43:07.288746] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:41.459 [2024-11-18 10:43:07.288784] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:41.459 [2024-11-18 10:43:07.288800] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:41.459 [2024-11-18 10:43:07.288814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.459 [2024-11-18 10:43:07.288822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:41.459 request: 00:14:41.459 { 00:14:41.459 "name": "raid_bdev1", 00:14:41.459 "raid_level": "raid5f", 00:14:41.459 "base_bdevs": [ 00:14:41.459 "malloc1", 00:14:41.459 "malloc2", 00:14:41.459 "malloc3" 00:14:41.459 ], 00:14:41.459 "strip_size_kb": 64, 00:14:41.459 "superblock": false, 00:14:41.459 "method": "bdev_raid_create", 00:14:41.459 "req_id": 1 00:14:41.459 } 00:14:41.459 Got JSON-RPC error response 00:14:41.459 response: 00:14:41.459 { 00:14:41.459 "code": -17, 00:14:41.459 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:41.459 } 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:41.459 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.720 [2024-11-18 10:43:07.354912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.720 [2024-11-18 10:43:07.355002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.720 [2024-11-18 10:43:07.355034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:41.720 [2024-11-18 10:43:07.355071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.720 [2024-11-18 10:43:07.357011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.720 [2024-11-18 10:43:07.357077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.720 [2024-11-18 10:43:07.357156] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.720 [2024-11-18 10:43:07.357232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.720 pt1 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.720 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.721 "name": "raid_bdev1", 00:14:41.721 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:41.721 "strip_size_kb": 64, 00:14:41.721 "state": "configuring", 00:14:41.721 "raid_level": "raid5f", 00:14:41.721 "superblock": true, 00:14:41.721 "num_base_bdevs": 3, 00:14:41.721 "num_base_bdevs_discovered": 1, 00:14:41.721 "num_base_bdevs_operational": 3, 00:14:41.721 "base_bdevs_list": [ 00:14:41.721 { 00:14:41.721 "name": "pt1", 00:14:41.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.721 "is_configured": true, 00:14:41.721 "data_offset": 2048, 00:14:41.721 "data_size": 63488 00:14:41.721 }, 00:14:41.721 { 00:14:41.721 "name": null, 00:14:41.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.721 "is_configured": false, 00:14:41.721 "data_offset": 2048, 00:14:41.721 "data_size": 63488 00:14:41.721 }, 00:14:41.721 { 00:14:41.721 "name": null, 00:14:41.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.721 "is_configured": false, 00:14:41.721 "data_offset": 2048, 00:14:41.721 "data_size": 63488 00:14:41.721 } 00:14:41.721 ] 00:14:41.721 }' 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.721 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.981 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:41.981 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.981 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.981 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.981 [2024-11-18 10:43:07.806162] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.981 [2024-11-18 10:43:07.806221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.981 [2024-11-18 10:43:07.806239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:41.981 [2024-11-18 10:43:07.806258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.981 [2024-11-18 10:43:07.806609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.981 [2024-11-18 10:43:07.806646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.981 [2024-11-18 10:43:07.806709] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:41.981 [2024-11-18 10:43:07.806727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.981 pt2 00:14:41.981 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.981 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.982 [2024-11-18 10:43:07.818164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.982 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.242 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.242 "name": "raid_bdev1", 00:14:42.242 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:42.242 "strip_size_kb": 64, 00:14:42.242 "state": "configuring", 00:14:42.242 "raid_level": "raid5f", 00:14:42.242 "superblock": true, 00:14:42.242 "num_base_bdevs": 3, 00:14:42.242 "num_base_bdevs_discovered": 1, 00:14:42.242 "num_base_bdevs_operational": 3, 00:14:42.242 "base_bdevs_list": [ 00:14:42.242 { 00:14:42.242 "name": "pt1", 00:14:42.242 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.242 "is_configured": true, 00:14:42.242 "data_offset": 2048, 00:14:42.242 "data_size": 63488 00:14:42.242 }, 00:14:42.242 { 
00:14:42.242 "name": null, 00:14:42.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.242 "is_configured": false, 00:14:42.242 "data_offset": 0, 00:14:42.242 "data_size": 63488 00:14:42.242 }, 00:14:42.242 { 00:14:42.242 "name": null, 00:14:42.242 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.242 "is_configured": false, 00:14:42.242 "data_offset": 2048, 00:14:42.242 "data_size": 63488 00:14:42.242 } 00:14:42.242 ] 00:14:42.242 }' 00:14:42.242 10:43:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.242 10:43:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.502 [2024-11-18 10:43:08.285309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.502 [2024-11-18 10:43:08.285403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.502 [2024-11-18 10:43:08.285431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:42.502 [2024-11-18 10:43:08.285457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.502 [2024-11-18 10:43:08.285798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.502 [2024-11-18 10:43:08.285854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.502 [2024-11-18 
10:43:08.285929] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.502 [2024-11-18 10:43:08.285974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.502 pt2 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.502 [2024-11-18 10:43:08.297291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.502 [2024-11-18 10:43:08.297373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.502 [2024-11-18 10:43:08.297398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:42.502 [2024-11-18 10:43:08.297422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.502 [2024-11-18 10:43:08.297743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.502 [2024-11-18 10:43:08.297801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.502 [2024-11-18 10:43:08.297874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.502 [2024-11-18 10:43:08.297920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.502 [2024-11-18 10:43:08.298039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:14:42.502 [2024-11-18 10:43:08.298077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.502 [2024-11-18 10:43:08.298311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:42.502 [2024-11-18 10:43:08.303185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:42.502 [2024-11-18 10:43:08.303252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:42.502 [2024-11-18 10:43:08.303461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.502 pt3 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.502 "name": "raid_bdev1", 00:14:42.502 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:42.502 "strip_size_kb": 64, 00:14:42.502 "state": "online", 00:14:42.502 "raid_level": "raid5f", 00:14:42.502 "superblock": true, 00:14:42.502 "num_base_bdevs": 3, 00:14:42.502 "num_base_bdevs_discovered": 3, 00:14:42.502 "num_base_bdevs_operational": 3, 00:14:42.502 "base_bdevs_list": [ 00:14:42.502 { 00:14:42.502 "name": "pt1", 00:14:42.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.502 "is_configured": true, 00:14:42.502 "data_offset": 2048, 00:14:42.502 "data_size": 63488 00:14:42.502 }, 00:14:42.502 { 00:14:42.502 "name": "pt2", 00:14:42.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.502 "is_configured": true, 00:14:42.502 "data_offset": 2048, 00:14:42.502 "data_size": 63488 00:14:42.502 }, 00:14:42.502 { 00:14:42.502 "name": "pt3", 00:14:42.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.502 "is_configured": true, 00:14:42.502 "data_offset": 2048, 00:14:42.502 "data_size": 63488 00:14:42.502 } 00:14:42.502 ] 00:14:42.502 }' 00:14:42.502 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.502 10:43:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.145 [2024-11-18 10:43:08.804735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.145 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.145 "name": "raid_bdev1", 00:14:43.145 "aliases": [ 00:14:43.145 "89784b80-0efd-4fe1-b4f3-4128858142b9" 00:14:43.145 ], 00:14:43.145 "product_name": "Raid Volume", 00:14:43.145 "block_size": 512, 00:14:43.145 "num_blocks": 126976, 00:14:43.145 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:43.145 "assigned_rate_limits": { 00:14:43.145 "rw_ios_per_sec": 0, 00:14:43.145 "rw_mbytes_per_sec": 0, 00:14:43.145 "r_mbytes_per_sec": 0, 00:14:43.145 "w_mbytes_per_sec": 0 00:14:43.145 }, 
00:14:43.145 "claimed": false, 00:14:43.145 "zoned": false, 00:14:43.145 "supported_io_types": { 00:14:43.145 "read": true, 00:14:43.145 "write": true, 00:14:43.145 "unmap": false, 00:14:43.145 "flush": false, 00:14:43.145 "reset": true, 00:14:43.145 "nvme_admin": false, 00:14:43.145 "nvme_io": false, 00:14:43.145 "nvme_io_md": false, 00:14:43.145 "write_zeroes": true, 00:14:43.145 "zcopy": false, 00:14:43.145 "get_zone_info": false, 00:14:43.145 "zone_management": false, 00:14:43.145 "zone_append": false, 00:14:43.145 "compare": false, 00:14:43.145 "compare_and_write": false, 00:14:43.145 "abort": false, 00:14:43.145 "seek_hole": false, 00:14:43.145 "seek_data": false, 00:14:43.145 "copy": false, 00:14:43.145 "nvme_iov_md": false 00:14:43.145 }, 00:14:43.145 "driver_specific": { 00:14:43.145 "raid": { 00:14:43.145 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:43.145 "strip_size_kb": 64, 00:14:43.145 "state": "online", 00:14:43.145 "raid_level": "raid5f", 00:14:43.145 "superblock": true, 00:14:43.145 "num_base_bdevs": 3, 00:14:43.145 "num_base_bdevs_discovered": 3, 00:14:43.145 "num_base_bdevs_operational": 3, 00:14:43.146 "base_bdevs_list": [ 00:14:43.146 { 00:14:43.146 "name": "pt1", 00:14:43.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.146 "is_configured": true, 00:14:43.146 "data_offset": 2048, 00:14:43.146 "data_size": 63488 00:14:43.146 }, 00:14:43.146 { 00:14:43.146 "name": "pt2", 00:14:43.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.146 "is_configured": true, 00:14:43.146 "data_offset": 2048, 00:14:43.146 "data_size": 63488 00:14:43.146 }, 00:14:43.146 { 00:14:43.146 "name": "pt3", 00:14:43.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.146 "is_configured": true, 00:14:43.146 "data_offset": 2048, 00:14:43.146 "data_size": 63488 00:14:43.146 } 00:14:43.146 ] 00:14:43.146 } 00:14:43.146 } 00:14:43.146 }' 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:43.146 pt2 00:14:43.146 pt3' 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.146 10:43:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:43.146 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.414 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.415 [2024-11-18 10:43:09.076342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
89784b80-0efd-4fe1-b4f3-4128858142b9 '!=' 89784b80-0efd-4fe1-b4f3-4128858142b9 ']' 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.415 [2024-11-18 10:43:09.124131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.415 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.416 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.416 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.416 "name": "raid_bdev1", 00:14:43.416 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:43.416 "strip_size_kb": 64, 00:14:43.416 "state": "online", 00:14:43.416 "raid_level": "raid5f", 00:14:43.416 "superblock": true, 00:14:43.416 "num_base_bdevs": 3, 00:14:43.416 "num_base_bdevs_discovered": 2, 00:14:43.416 "num_base_bdevs_operational": 2, 00:14:43.416 "base_bdevs_list": [ 00:14:43.416 { 00:14:43.416 "name": null, 00:14:43.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.416 "is_configured": false, 00:14:43.416 "data_offset": 0, 00:14:43.416 "data_size": 63488 00:14:43.416 }, 00:14:43.416 { 00:14:43.416 "name": "pt2", 00:14:43.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.416 "is_configured": true, 00:14:43.416 "data_offset": 2048, 00:14:43.416 "data_size": 63488 00:14:43.416 }, 00:14:43.416 { 00:14:43.416 "name": "pt3", 00:14:43.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.416 "is_configured": true, 00:14:43.416 "data_offset": 2048, 00:14:43.416 "data_size": 63488 00:14:43.416 } 00:14:43.416 ] 00:14:43.416 }' 00:14:43.416 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.416 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.989 
10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.989 [2024-11-18 10:43:09.587281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.989 [2024-11-18 10:43:09.587352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.989 [2024-11-18 10:43:09.587403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.989 [2024-11-18 10:43:09.587445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.989 [2024-11-18 10:43:09.587458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.989 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.990 [2024-11-18 10:43:09.675146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:14:43.990 [2024-11-18 10:43:09.675245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.990 [2024-11-18 10:43:09.675262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:43.990 [2024-11-18 10:43:09.675272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.990 [2024-11-18 10:43:09.677224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.990 [2024-11-18 10:43:09.677261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:43.990 [2024-11-18 10:43:09.677320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:43.990 [2024-11-18 10:43:09.677359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.990 pt2 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.990 "name": "raid_bdev1", 00:14:43.990 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:43.990 "strip_size_kb": 64, 00:14:43.990 "state": "configuring", 00:14:43.990 "raid_level": "raid5f", 00:14:43.990 "superblock": true, 00:14:43.990 "num_base_bdevs": 3, 00:14:43.990 "num_base_bdevs_discovered": 1, 00:14:43.990 "num_base_bdevs_operational": 2, 00:14:43.990 "base_bdevs_list": [ 00:14:43.990 { 00:14:43.990 "name": null, 00:14:43.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.990 "is_configured": false, 00:14:43.990 "data_offset": 2048, 00:14:43.990 "data_size": 63488 00:14:43.990 }, 00:14:43.990 { 00:14:43.990 "name": "pt2", 00:14:43.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.990 "is_configured": true, 00:14:43.990 "data_offset": 2048, 00:14:43.990 "data_size": 63488 00:14:43.990 }, 00:14:43.990 { 00:14:43.990 "name": null, 00:14:43.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.990 "is_configured": false, 00:14:43.990 "data_offset": 2048, 00:14:43.990 "data_size": 63488 00:14:43.990 } 00:14:43.990 ] 00:14:43.990 }' 00:14:43.990 10:43:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.990 10:43:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.250 [2024-11-18 10:43:10.106860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:44.250 [2024-11-18 10:43:10.106954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.250 [2024-11-18 10:43:10.106987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:44.250 [2024-11-18 10:43:10.107015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.250 [2024-11-18 10:43:10.107404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.250 [2024-11-18 10:43:10.107464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:44.250 [2024-11-18 10:43:10.107544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:44.250 [2024-11-18 10:43:10.107600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:44.250 [2024-11-18 10:43:10.107732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:44.250 [2024-11-18 10:43:10.107771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.250 [2024-11-18 
10:43:10.107996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:44.250 [2024-11-18 10:43:10.112987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:44.250 [2024-11-18 10:43:10.113038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:44.250 [2024-11-18 10:43:10.113359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.250 pt3 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.250 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.510 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.510 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.510 "name": "raid_bdev1", 00:14:44.510 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:44.510 "strip_size_kb": 64, 00:14:44.510 "state": "online", 00:14:44.510 "raid_level": "raid5f", 00:14:44.510 "superblock": true, 00:14:44.510 "num_base_bdevs": 3, 00:14:44.510 "num_base_bdevs_discovered": 2, 00:14:44.510 "num_base_bdevs_operational": 2, 00:14:44.510 "base_bdevs_list": [ 00:14:44.510 { 00:14:44.510 "name": null, 00:14:44.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.510 "is_configured": false, 00:14:44.510 "data_offset": 2048, 00:14:44.510 "data_size": 63488 00:14:44.510 }, 00:14:44.510 { 00:14:44.510 "name": "pt2", 00:14:44.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.510 "is_configured": true, 00:14:44.510 "data_offset": 2048, 00:14:44.510 "data_size": 63488 00:14:44.510 }, 00:14:44.510 { 00:14:44.510 "name": "pt3", 00:14:44.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.510 "is_configured": true, 00:14:44.510 "data_offset": 2048, 00:14:44.510 "data_size": 63488 00:14:44.510 } 00:14:44.510 ] 00:14:44.510 }' 00:14:44.510 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.510 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.771 [2024-11-18 10:43:10.534955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.771 [2024-11-18 10:43:10.535037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.771 [2024-11-18 10:43:10.535107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.771 [2024-11-18 10:43:10.535165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.771 [2024-11-18 10:43:10.535190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.771 10:43:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.771 [2024-11-18 10:43:10.606847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:44.771 [2024-11-18 10:43:10.607183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.771 [2024-11-18 10:43:10.607252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:44.771 [2024-11-18 10:43:10.607302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.771 [2024-11-18 10:43:10.609351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.771 [2024-11-18 10:43:10.609448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:44.771 [2024-11-18 10:43:10.609552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:44.771 [2024-11-18 10:43:10.609598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:44.771 [2024-11-18 10:43:10.609704] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:44.771 [2024-11-18 10:43:10.609731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.771 [2024-11-18 10:43:10.609744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:44.771 
[2024-11-18 10:43:10.609800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:44.771 pt1 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.771 "name": "raid_bdev1", 00:14:44.771 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:44.771 "strip_size_kb": 64, 00:14:44.771 "state": "configuring", 00:14:44.771 "raid_level": "raid5f", 00:14:44.771 "superblock": true, 00:14:44.771 "num_base_bdevs": 3, 00:14:44.771 "num_base_bdevs_discovered": 1, 00:14:44.771 "num_base_bdevs_operational": 2, 00:14:44.771 "base_bdevs_list": [ 00:14:44.771 { 00:14:44.771 "name": null, 00:14:44.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.771 "is_configured": false, 00:14:44.771 "data_offset": 2048, 00:14:44.771 "data_size": 63488 00:14:44.771 }, 00:14:44.771 { 00:14:44.771 "name": "pt2", 00:14:44.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.771 "is_configured": true, 00:14:44.771 "data_offset": 2048, 00:14:44.771 "data_size": 63488 00:14:44.771 }, 00:14:44.771 { 00:14:44.771 "name": null, 00:14:44.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.771 "is_configured": false, 00:14:44.771 "data_offset": 2048, 00:14:44.771 "data_size": 63488 00:14:44.771 } 00:14:44.771 ] 00:14:44.771 }' 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.771 10:43:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.341 [2024-11-18 10:43:11.058061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:45.341 [2024-11-18 10:43:11.058397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.341 [2024-11-18 10:43:11.058432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:45.341 [2024-11-18 10:43:11.058441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.341 [2024-11-18 10:43:11.058810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.341 [2024-11-18 10:43:11.058834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:45.341 [2024-11-18 10:43:11.058893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:45.341 [2024-11-18 10:43:11.058910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:45.341 [2024-11-18 10:43:11.059005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:45.341 [2024-11-18 10:43:11.059020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:45.341 [2024-11-18 10:43:11.059259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:45.341 [2024-11-18 10:43:11.064478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:45.341 [2024-11-18 
10:43:11.064504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:45.341 [2024-11-18 10:43:11.064702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.341 pt3 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.341 10:43:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.341 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.341 "name": "raid_bdev1", 00:14:45.341 "uuid": "89784b80-0efd-4fe1-b4f3-4128858142b9", 00:14:45.341 "strip_size_kb": 64, 00:14:45.341 "state": "online", 00:14:45.341 "raid_level": "raid5f", 00:14:45.341 "superblock": true, 00:14:45.341 "num_base_bdevs": 3, 00:14:45.341 "num_base_bdevs_discovered": 2, 00:14:45.341 "num_base_bdevs_operational": 2, 00:14:45.341 "base_bdevs_list": [ 00:14:45.341 { 00:14:45.341 "name": null, 00:14:45.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.341 "is_configured": false, 00:14:45.342 "data_offset": 2048, 00:14:45.342 "data_size": 63488 00:14:45.342 }, 00:14:45.342 { 00:14:45.342 "name": "pt2", 00:14:45.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.342 "is_configured": true, 00:14:45.342 "data_offset": 2048, 00:14:45.342 "data_size": 63488 00:14:45.342 }, 00:14:45.342 { 00:14:45.342 "name": "pt3", 00:14:45.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:45.342 "is_configured": true, 00:14:45.342 "data_offset": 2048, 00:14:45.342 "data_size": 63488 00:14:45.342 } 00:14:45.342 ] 00:14:45.342 }' 00:14:45.342 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.342 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:45.911 [2024-11-18 10:43:11.585926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 89784b80-0efd-4fe1-b4f3-4128858142b9 '!=' 89784b80-0efd-4fe1-b4f3-4128858142b9 ']' 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80928 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80928 ']' 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80928 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80928 00:14:45.911 killing process with pid 80928 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 80928' 00:14:45.911 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80928 00:14:45.911 [2024-11-18 10:43:11.663901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.911 [2024-11-18 10:43:11.663963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.912 [2024-11-18 10:43:11.664006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.912 [2024-11-18 10:43:11.664016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:45.912 10:43:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80928 00:14:46.172 [2024-11-18 10:43:11.946927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.112 10:43:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:47.112 00:14:47.112 real 0m7.714s 00:14:47.112 user 0m12.138s 00:14:47.112 sys 0m1.455s 00:14:47.112 10:43:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.112 10:43:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.112 ************************************ 00:14:47.112 END TEST raid5f_superblock_test 00:14:47.112 ************************************ 00:14:47.373 10:43:13 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:47.373 10:43:13 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:47.373 10:43:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:47.373 10:43:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.373 10:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.373 ************************************ 00:14:47.373 START TEST 
raid5f_rebuild_test 00:14:47.373 ************************************ 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:47.373 10:43:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81366 00:14:47.373 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:47.374 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81366 00:14:47.374 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81366 ']' 00:14:47.374 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.374 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:47.374 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.374 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.374 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.374 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:47.374 Zero copy mechanism will not be used. 00:14:47.374 [2024-11-18 10:43:13.157712] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:47.374 [2024-11-18 10:43:13.157824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81366 ] 00:14:47.634 [2024-11-18 10:43:13.333925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.635 [2024-11-18 10:43:13.438860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.895 [2024-11-18 10:43:13.601347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.895 [2024-11-18 10:43:13.601386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.155 BaseBdev1_malloc 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.155 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.155 [2024-11-18 10:43:13.993101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:48.155 [2024-11-18 10:43:13.993184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.156 [2024-11-18 10:43:13.993209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:48.156 [2024-11-18 10:43:13.993220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.156 [2024-11-18 10:43:13.995112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.156 [2024-11-18 10:43:13.995147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:48.156 BaseBdev1 00:14:48.156 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.156 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.156 10:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:48.156 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.156 10:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 BaseBdev2_malloc 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 10:43:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 [2024-11-18 10:43:14.046341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:48.417 [2024-11-18 10:43:14.046392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.417 [2024-11-18 10:43:14.046409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:48.417 [2024-11-18 10:43:14.046422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.417 [2024-11-18 10:43:14.048280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.417 [2024-11-18 10:43:14.048312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:48.417 BaseBdev2 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 BaseBdev3_malloc 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 [2024-11-18 10:43:14.109576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:48.417 [2024-11-18 10:43:14.109623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.417 [2024-11-18 10:43:14.109641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:48.417 [2024-11-18 10:43:14.109651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.417 [2024-11-18 10:43:14.111505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.417 [2024-11-18 10:43:14.111542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:48.417 BaseBdev3 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 spare_malloc 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 spare_delay 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 [2024-11-18 10:43:14.170387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:48.417 [2024-11-18 10:43:14.170436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.417 [2024-11-18 10:43:14.170452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:48.417 [2024-11-18 10:43:14.170462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.417 [2024-11-18 10:43:14.172388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.417 [2024-11-18 10:43:14.172426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:48.417 spare 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.417 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.417 [2024-11-18 10:43:14.182427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.417 [2024-11-18 10:43:14.184041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.417 [2024-11-18 10:43:14.184102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.417 [2024-11-18 10:43:14.184189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:14:48.417 [2024-11-18 10:43:14.184200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:48.417 [2024-11-18 10:43:14.184430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:48.417 [2024-11-18 10:43:14.189421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:48.418 [2024-11-18 10:43:14.189446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:48.418 [2024-11-18 10:43:14.189616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.418 
10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.418 "name": "raid_bdev1", 00:14:48.418 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:48.418 "strip_size_kb": 64, 00:14:48.418 "state": "online", 00:14:48.418 "raid_level": "raid5f", 00:14:48.418 "superblock": false, 00:14:48.418 "num_base_bdevs": 3, 00:14:48.418 "num_base_bdevs_discovered": 3, 00:14:48.418 "num_base_bdevs_operational": 3, 00:14:48.418 "base_bdevs_list": [ 00:14:48.418 { 00:14:48.418 "name": "BaseBdev1", 00:14:48.418 "uuid": "9299976c-4012-5331-bee7-0e716641c8cb", 00:14:48.418 "is_configured": true, 00:14:48.418 "data_offset": 0, 00:14:48.418 "data_size": 65536 00:14:48.418 }, 00:14:48.418 { 00:14:48.418 "name": "BaseBdev2", 00:14:48.418 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:48.418 "is_configured": true, 00:14:48.418 "data_offset": 0, 00:14:48.418 "data_size": 65536 00:14:48.418 }, 00:14:48.418 { 00:14:48.418 "name": "BaseBdev3", 00:14:48.418 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:48.418 "is_configured": true, 00:14:48.418 "data_offset": 0, 00:14:48.418 "data_size": 65536 00:14:48.418 } 00:14:48.418 ] 00:14:48.418 }' 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.418 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.990 [2024-11-18 10:43:14.623362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.990 
10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.990 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:49.251 [2024-11-18 10:43:14.890987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:49.251 /dev/nbd0 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:14:49.251 1+0 records in 00:14:49.251 1+0 records out 00:14:49.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413834 s, 9.9 MB/s 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:49.251 10:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:49.823 512+0 records in 00:14:49.823 512+0 records out 00:14:49.823 67108864 bytes (67 MB, 64 MiB) copied, 0.411511 s, 163 MB/s 00:14:49.823 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:49.823 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.823 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:49.823 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:49.823 10:43:15 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:49.823 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:49.824 [2024-11-18 10:43:15.603721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.824 [2024-11-18 10:43:15.630425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.824 "name": "raid_bdev1", 00:14:49.824 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:49.824 "strip_size_kb": 64, 00:14:49.824 "state": "online", 00:14:49.824 "raid_level": "raid5f", 00:14:49.824 "superblock": false, 00:14:49.824 "num_base_bdevs": 3, 00:14:49.824 "num_base_bdevs_discovered": 2, 00:14:49.824 "num_base_bdevs_operational": 2, 00:14:49.824 "base_bdevs_list": [ 00:14:49.824 { 00:14:49.824 "name": null, 00:14:49.824 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:49.824 "is_configured": false, 00:14:49.824 "data_offset": 0, 00:14:49.824 "data_size": 65536 00:14:49.824 }, 00:14:49.824 { 00:14:49.824 "name": "BaseBdev2", 00:14:49.824 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:49.824 "is_configured": true, 00:14:49.824 "data_offset": 0, 00:14:49.824 "data_size": 65536 00:14:49.824 }, 00:14:49.824 { 00:14:49.824 "name": "BaseBdev3", 00:14:49.824 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:49.824 "is_configured": true, 00:14:49.824 "data_offset": 0, 00:14:49.824 "data_size": 65536 00:14:49.824 } 00:14:49.824 ] 00:14:49.824 }' 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.824 10:43:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.395 10:43:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:50.395 10:43:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.395 10:43:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.395 [2024-11-18 10:43:16.117622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.395 [2024-11-18 10:43:16.131813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:50.395 10:43:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.395 10:43:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:50.395 [2024-11-18 10:43:16.138977] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.336 
10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.336 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.336 "name": "raid_bdev1", 00:14:51.336 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:51.336 "strip_size_kb": 64, 00:14:51.336 "state": "online", 00:14:51.336 "raid_level": "raid5f", 00:14:51.336 "superblock": false, 00:14:51.336 "num_base_bdevs": 3, 00:14:51.336 "num_base_bdevs_discovered": 3, 00:14:51.336 "num_base_bdevs_operational": 3, 00:14:51.336 "process": { 00:14:51.336 "type": "rebuild", 00:14:51.336 "target": "spare", 00:14:51.336 "progress": { 00:14:51.336 "blocks": 20480, 00:14:51.336 "percent": 15 00:14:51.336 } 00:14:51.336 }, 00:14:51.336 "base_bdevs_list": [ 00:14:51.336 { 00:14:51.336 "name": "spare", 00:14:51.336 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:51.336 "is_configured": true, 00:14:51.336 "data_offset": 0, 00:14:51.336 "data_size": 65536 00:14:51.336 }, 00:14:51.336 { 00:14:51.337 "name": "BaseBdev2", 00:14:51.337 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:51.337 "is_configured": true, 00:14:51.337 "data_offset": 0, 00:14:51.337 "data_size": 65536 00:14:51.337 }, 00:14:51.337 
{ 00:14:51.337 "name": "BaseBdev3", 00:14:51.337 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:51.337 "is_configured": true, 00:14:51.337 "data_offset": 0, 00:14:51.337 "data_size": 65536 00:14:51.337 } 00:14:51.337 ] 00:14:51.337 }' 00:14:51.337 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.337 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.337 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.596 [2024-11-18 10:43:17.273865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.596 [2024-11-18 10:43:17.345950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.596 [2024-11-18 10:43:17.345998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.596 [2024-11-18 10:43:17.346015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.596 [2024-11-18 10:43:17.346021] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.596 "name": "raid_bdev1", 00:14:51.596 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:51.596 "strip_size_kb": 64, 00:14:51.596 "state": "online", 00:14:51.596 "raid_level": "raid5f", 00:14:51.596 "superblock": false, 00:14:51.596 "num_base_bdevs": 3, 00:14:51.596 "num_base_bdevs_discovered": 2, 00:14:51.596 "num_base_bdevs_operational": 2, 00:14:51.596 "base_bdevs_list": [ 00:14:51.596 { 00:14:51.596 "name": null, 00:14:51.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.596 
"is_configured": false, 00:14:51.596 "data_offset": 0, 00:14:51.596 "data_size": 65536 00:14:51.596 }, 00:14:51.596 { 00:14:51.596 "name": "BaseBdev2", 00:14:51.596 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:51.596 "is_configured": true, 00:14:51.596 "data_offset": 0, 00:14:51.596 "data_size": 65536 00:14:51.596 }, 00:14:51.596 { 00:14:51.596 "name": "BaseBdev3", 00:14:51.596 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:51.596 "is_configured": true, 00:14:51.596 "data_offset": 0, 00:14:51.596 "data_size": 65536 00:14:51.596 } 00:14:51.596 ] 00:14:51.596 }' 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.596 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.167 "name": 
"raid_bdev1", 00:14:52.167 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:52.167 "strip_size_kb": 64, 00:14:52.167 "state": "online", 00:14:52.167 "raid_level": "raid5f", 00:14:52.167 "superblock": false, 00:14:52.167 "num_base_bdevs": 3, 00:14:52.167 "num_base_bdevs_discovered": 2, 00:14:52.167 "num_base_bdevs_operational": 2, 00:14:52.167 "base_bdevs_list": [ 00:14:52.167 { 00:14:52.167 "name": null, 00:14:52.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.167 "is_configured": false, 00:14:52.167 "data_offset": 0, 00:14:52.167 "data_size": 65536 00:14:52.167 }, 00:14:52.167 { 00:14:52.167 "name": "BaseBdev2", 00:14:52.167 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:52.167 "is_configured": true, 00:14:52.167 "data_offset": 0, 00:14:52.167 "data_size": 65536 00:14:52.167 }, 00:14:52.167 { 00:14:52.167 "name": "BaseBdev3", 00:14:52.167 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:52.167 "is_configured": true, 00:14:52.167 "data_offset": 0, 00:14:52.167 "data_size": 65536 00:14:52.167 } 00:14:52.167 ] 00:14:52.167 }' 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.167 [2024-11-18 10:43:17.974654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.167 [2024-11-18 
10:43:17.989572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.167 10:43:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:52.167 [2024-11-18 10:43:17.996621] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:53.552 10:43:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.552 10:43:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.552 10:43:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.552 10:43:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.552 10:43:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.552 "name": "raid_bdev1", 00:14:53.552 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:53.552 "strip_size_kb": 64, 00:14:53.552 "state": "online", 00:14:53.552 "raid_level": "raid5f", 00:14:53.552 "superblock": false, 00:14:53.552 "num_base_bdevs": 3, 00:14:53.552 "num_base_bdevs_discovered": 3, 00:14:53.552 "num_base_bdevs_operational": 3, 
00:14:53.552 "process": { 00:14:53.552 "type": "rebuild", 00:14:53.552 "target": "spare", 00:14:53.552 "progress": { 00:14:53.552 "blocks": 20480, 00:14:53.552 "percent": 15 00:14:53.552 } 00:14:53.552 }, 00:14:53.552 "base_bdevs_list": [ 00:14:53.552 { 00:14:53.552 "name": "spare", 00:14:53.552 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:53.552 "is_configured": true, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 65536 00:14:53.552 }, 00:14:53.552 { 00:14:53.552 "name": "BaseBdev2", 00:14:53.552 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:53.552 "is_configured": true, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 65536 00:14:53.552 }, 00:14:53.552 { 00:14:53.552 "name": "BaseBdev3", 00:14:53.552 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:53.552 "is_configured": true, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 65536 00:14:53.552 } 00:14:53.552 ] 00:14:53.552 }' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=541 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.552 "name": "raid_bdev1", 00:14:53.552 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:53.552 "strip_size_kb": 64, 00:14:53.552 "state": "online", 00:14:53.552 "raid_level": "raid5f", 00:14:53.552 "superblock": false, 00:14:53.552 "num_base_bdevs": 3, 00:14:53.552 "num_base_bdevs_discovered": 3, 00:14:53.552 "num_base_bdevs_operational": 3, 00:14:53.552 "process": { 00:14:53.552 "type": "rebuild", 00:14:53.552 "target": "spare", 00:14:53.552 "progress": { 00:14:53.552 "blocks": 22528, 00:14:53.552 "percent": 17 00:14:53.552 } 00:14:53.552 }, 00:14:53.552 "base_bdevs_list": [ 00:14:53.552 { 00:14:53.552 "name": "spare", 00:14:53.552 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:53.552 "is_configured": true, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 65536 00:14:53.552 }, 00:14:53.552 { 00:14:53.552 "name": "BaseBdev2", 
00:14:53.552 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:53.552 "is_configured": true, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 65536 00:14:53.552 }, 00:14:53.552 { 00:14:53.552 "name": "BaseBdev3", 00:14:53.552 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:53.552 "is_configured": true, 00:14:53.552 "data_offset": 0, 00:14:53.552 "data_size": 65536 00:14:53.552 } 00:14:53.552 ] 00:14:53.552 }' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.552 10:43:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.492 
10:43:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.492 "name": "raid_bdev1", 00:14:54.492 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:54.492 "strip_size_kb": 64, 00:14:54.492 "state": "online", 00:14:54.492 "raid_level": "raid5f", 00:14:54.492 "superblock": false, 00:14:54.492 "num_base_bdevs": 3, 00:14:54.492 "num_base_bdevs_discovered": 3, 00:14:54.492 "num_base_bdevs_operational": 3, 00:14:54.492 "process": { 00:14:54.492 "type": "rebuild", 00:14:54.492 "target": "spare", 00:14:54.492 "progress": { 00:14:54.492 "blocks": 45056, 00:14:54.492 "percent": 34 00:14:54.492 } 00:14:54.492 }, 00:14:54.492 "base_bdevs_list": [ 00:14:54.492 { 00:14:54.492 "name": "spare", 00:14:54.492 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:54.492 "is_configured": true, 00:14:54.492 "data_offset": 0, 00:14:54.492 "data_size": 65536 00:14:54.492 }, 00:14:54.492 { 00:14:54.492 "name": "BaseBdev2", 00:14:54.492 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:54.492 "is_configured": true, 00:14:54.492 "data_offset": 0, 00:14:54.492 "data_size": 65536 00:14:54.492 }, 00:14:54.492 { 00:14:54.492 "name": "BaseBdev3", 00:14:54.492 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:54.492 "is_configured": true, 00:14:54.492 "data_offset": 0, 00:14:54.492 "data_size": 65536 00:14:54.492 } 00:14:54.492 ] 00:14:54.492 }' 00:14:54.492 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.752 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.753 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.753 10:43:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.753 10:43:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.695 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.695 "name": "raid_bdev1", 00:14:55.695 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:55.695 "strip_size_kb": 64, 00:14:55.695 "state": "online", 00:14:55.695 "raid_level": "raid5f", 00:14:55.695 "superblock": false, 00:14:55.695 "num_base_bdevs": 3, 00:14:55.696 "num_base_bdevs_discovered": 3, 00:14:55.696 "num_base_bdevs_operational": 3, 00:14:55.696 "process": { 00:14:55.696 "type": "rebuild", 00:14:55.696 "target": "spare", 00:14:55.696 "progress": { 00:14:55.696 "blocks": 69632, 00:14:55.696 "percent": 53 00:14:55.696 } 
00:14:55.696 }, 00:14:55.696 "base_bdevs_list": [ 00:14:55.696 { 00:14:55.696 "name": "spare", 00:14:55.696 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:55.696 "is_configured": true, 00:14:55.696 "data_offset": 0, 00:14:55.696 "data_size": 65536 00:14:55.696 }, 00:14:55.696 { 00:14:55.696 "name": "BaseBdev2", 00:14:55.696 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:55.696 "is_configured": true, 00:14:55.696 "data_offset": 0, 00:14:55.696 "data_size": 65536 00:14:55.696 }, 00:14:55.696 { 00:14:55.696 "name": "BaseBdev3", 00:14:55.696 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:55.696 "is_configured": true, 00:14:55.696 "data_offset": 0, 00:14:55.696 "data_size": 65536 00:14:55.696 } 00:14:55.696 ] 00:14:55.696 }' 00:14:55.696 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.696 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.696 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.957 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.957 10:43:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.900 10:43:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.900 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.900 "name": "raid_bdev1", 00:14:56.901 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:56.901 "strip_size_kb": 64, 00:14:56.901 "state": "online", 00:14:56.901 "raid_level": "raid5f", 00:14:56.901 "superblock": false, 00:14:56.901 "num_base_bdevs": 3, 00:14:56.901 "num_base_bdevs_discovered": 3, 00:14:56.901 "num_base_bdevs_operational": 3, 00:14:56.901 "process": { 00:14:56.901 "type": "rebuild", 00:14:56.901 "target": "spare", 00:14:56.901 "progress": { 00:14:56.901 "blocks": 92160, 00:14:56.901 "percent": 70 00:14:56.901 } 00:14:56.901 }, 00:14:56.901 "base_bdevs_list": [ 00:14:56.901 { 00:14:56.901 "name": "spare", 00:14:56.901 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:56.901 "is_configured": true, 00:14:56.901 "data_offset": 0, 00:14:56.901 "data_size": 65536 00:14:56.901 }, 00:14:56.901 { 00:14:56.901 "name": "BaseBdev2", 00:14:56.901 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:56.901 "is_configured": true, 00:14:56.901 "data_offset": 0, 00:14:56.901 "data_size": 65536 00:14:56.901 }, 00:14:56.901 { 00:14:56.901 "name": "BaseBdev3", 00:14:56.901 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:56.901 "is_configured": true, 00:14:56.901 "data_offset": 0, 00:14:56.901 "data_size": 65536 00:14:56.901 } 00:14:56.901 ] 00:14:56.901 }' 00:14:56.901 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:14:56.901 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.901 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.901 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.901 10:43:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.288 "name": "raid_bdev1", 00:14:58.288 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:58.288 "strip_size_kb": 64, 00:14:58.288 "state": "online", 00:14:58.288 "raid_level": "raid5f", 00:14:58.288 "superblock": 
false, 00:14:58.288 "num_base_bdevs": 3, 00:14:58.288 "num_base_bdevs_discovered": 3, 00:14:58.288 "num_base_bdevs_operational": 3, 00:14:58.288 "process": { 00:14:58.288 "type": "rebuild", 00:14:58.288 "target": "spare", 00:14:58.288 "progress": { 00:14:58.288 "blocks": 116736, 00:14:58.288 "percent": 89 00:14:58.288 } 00:14:58.288 }, 00:14:58.288 "base_bdevs_list": [ 00:14:58.288 { 00:14:58.288 "name": "spare", 00:14:58.288 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:58.288 "is_configured": true, 00:14:58.288 "data_offset": 0, 00:14:58.288 "data_size": 65536 00:14:58.288 }, 00:14:58.288 { 00:14:58.288 "name": "BaseBdev2", 00:14:58.288 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:58.288 "is_configured": true, 00:14:58.288 "data_offset": 0, 00:14:58.288 "data_size": 65536 00:14:58.288 }, 00:14:58.288 { 00:14:58.288 "name": "BaseBdev3", 00:14:58.288 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:58.288 "is_configured": true, 00:14:58.288 "data_offset": 0, 00:14:58.288 "data_size": 65536 00:14:58.288 } 00:14:58.288 ] 00:14:58.288 }' 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.288 10:43:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.549 [2024-11-18 10:43:24.430892] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:58.549 [2024-11-18 10:43:24.430968] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:58.549 [2024-11-18 10:43:24.431001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.120 "name": "raid_bdev1", 00:14:59.120 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:59.120 "strip_size_kb": 64, 00:14:59.120 "state": "online", 00:14:59.120 "raid_level": "raid5f", 00:14:59.120 "superblock": false, 00:14:59.120 "num_base_bdevs": 3, 00:14:59.120 "num_base_bdevs_discovered": 3, 00:14:59.120 "num_base_bdevs_operational": 3, 00:14:59.120 "base_bdevs_list": [ 00:14:59.120 { 00:14:59.120 "name": "spare", 00:14:59.120 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:59.120 "is_configured": true, 00:14:59.120 "data_offset": 0, 00:14:59.120 "data_size": 65536 00:14:59.120 }, 00:14:59.120 { 00:14:59.120 "name": "BaseBdev2", 00:14:59.120 "uuid": 
"13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:59.120 "is_configured": true, 00:14:59.120 "data_offset": 0, 00:14:59.120 "data_size": 65536 00:14:59.120 }, 00:14:59.120 { 00:14:59.120 "name": "BaseBdev3", 00:14:59.120 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:59.120 "is_configured": true, 00:14:59.120 "data_offset": 0, 00:14:59.120 "data_size": 65536 00:14:59.120 } 00:14:59.120 ] 00:14:59.120 }' 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:59.120 10:43:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.380 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.380 "name": "raid_bdev1", 00:14:59.380 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:59.380 "strip_size_kb": 64, 00:14:59.380 "state": "online", 00:14:59.380 "raid_level": "raid5f", 00:14:59.380 "superblock": false, 00:14:59.380 "num_base_bdevs": 3, 00:14:59.380 "num_base_bdevs_discovered": 3, 00:14:59.380 "num_base_bdevs_operational": 3, 00:14:59.381 "base_bdevs_list": [ 00:14:59.381 { 00:14:59.381 "name": "spare", 00:14:59.381 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:59.381 "is_configured": true, 00:14:59.381 "data_offset": 0, 00:14:59.381 "data_size": 65536 00:14:59.381 }, 00:14:59.381 { 00:14:59.381 "name": "BaseBdev2", 00:14:59.381 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:59.381 "is_configured": true, 00:14:59.381 "data_offset": 0, 00:14:59.381 "data_size": 65536 00:14:59.381 }, 00:14:59.381 { 00:14:59.381 "name": "BaseBdev3", 00:14:59.381 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:59.381 "is_configured": true, 00:14:59.381 "data_offset": 0, 00:14:59.381 "data_size": 65536 00:14:59.381 } 00:14:59.381 ] 00:14:59.381 }' 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.381 "name": "raid_bdev1", 00:14:59.381 "uuid": "fa22191a-d924-4a0c-922f-6736680edea4", 00:14:59.381 "strip_size_kb": 64, 00:14:59.381 "state": "online", 00:14:59.381 "raid_level": "raid5f", 00:14:59.381 "superblock": false, 00:14:59.381 "num_base_bdevs": 3, 00:14:59.381 "num_base_bdevs_discovered": 3, 00:14:59.381 "num_base_bdevs_operational": 3, 00:14:59.381 "base_bdevs_list": [ 00:14:59.381 { 00:14:59.381 "name": "spare", 00:14:59.381 "uuid": "64a1f4af-f747-5be0-af74-677e5421f668", 00:14:59.381 "is_configured": true, 00:14:59.381 "data_offset": 
0, 00:14:59.381 "data_size": 65536 00:14:59.381 }, 00:14:59.381 { 00:14:59.381 "name": "BaseBdev2", 00:14:59.381 "uuid": "13f19bcf-2966-52ce-9c72-0daabad8ad15", 00:14:59.381 "is_configured": true, 00:14:59.381 "data_offset": 0, 00:14:59.381 "data_size": 65536 00:14:59.381 }, 00:14:59.381 { 00:14:59.381 "name": "BaseBdev3", 00:14:59.381 "uuid": "eba3ca02-46fb-53f3-b68b-9fd30367d43a", 00:14:59.381 "is_configured": true, 00:14:59.381 "data_offset": 0, 00:14:59.381 "data_size": 65536 00:14:59.381 } 00:14:59.381 ] 00:14:59.381 }' 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.381 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.950 [2024-11-18 10:43:25.629718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.950 [2024-11-18 10:43:25.629751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.950 [2024-11-18 10:43:25.629833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.950 [2024-11-18 10:43:25.629907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.950 [2024-11-18 10:43:25.629928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.950 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:00.210 /dev/nbd0 00:15:00.210 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:00.210 10:43:25 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.211 1+0 records in 00:15:00.211 1+0 records out 00:15:00.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408655 s, 10.0 MB/s 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.211 10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:00.211 
10:43:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:00.471 /dev/nbd1 00:15:00.471 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.472 1+0 records in 00:15:00.472 1+0 records out 00:15:00.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376565 s, 10.9 MB/s 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.472 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.732 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81366 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81366 ']' 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81366 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81366 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.000 killing process with pid 81366 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81366' 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81366 00:15:01.000 Received shutdown signal, test time was about 60.000000 seconds 00:15:01.000 00:15:01.000 Latency(us) 00:15:01.000 [2024-11-18T10:43:26.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.000 [2024-11-18T10:43:26.885Z] =================================================================================================================== 00:15:01.000 [2024-11-18T10:43:26.885Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:01.000 [2024-11-18 10:43:26.820564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.000 10:43:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81366 00:15:01.604 [2024-11-18 10:43:27.186690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:02.543 00:15:02.543 real 0m15.134s 00:15:02.543 user 0m18.565s 00:15:02.543 sys 0m2.081s 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.543 ************************************ 00:15:02.543 END TEST raid5f_rebuild_test 00:15:02.543 ************************************ 00:15:02.543 10:43:28 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:02.543 10:43:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:02.543 10:43:28 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.543 10:43:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.543 ************************************ 00:15:02.543 START TEST raid5f_rebuild_test_sb 00:15:02.543 ************************************ 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:02.543 10:43:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81804 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81804 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81804 
']' 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.543 10:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.543 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:02.543 Zero copy mechanism will not be used. 00:15:02.543 [2024-11-18 10:43:28.368661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:15:02.543 [2024-11-18 10:43:28.368766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81804 ] 00:15:02.802 [2024-11-18 10:43:28.549026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.802 [2024-11-18 10:43:28.654241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.061 [2024-11-18 10:43:28.844131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.062 [2024-11-18 10:43:28.844197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.321 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.321 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:03.321 10:43:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.321 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:03.321 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.321 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 BaseBdev1_malloc 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 [2024-11-18 10:43:29.240364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.580 [2024-11-18 10:43:29.240437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.580 [2024-11-18 10:43:29.240459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:03.580 [2024-11-18 10:43:29.240470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.580 [2024-11-18 10:43:29.242361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.580 [2024-11-18 10:43:29.242399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.580 BaseBdev1 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 BaseBdev2_malloc 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 [2024-11-18 10:43:29.293795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:03.580 [2024-11-18 10:43:29.293851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.580 [2024-11-18 10:43:29.293868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:03.580 [2024-11-18 10:43:29.293880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.580 [2024-11-18 10:43:29.295723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.580 [2024-11-18 10:43:29.295762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:03.580 BaseBdev2 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.580 
10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 BaseBdev3_malloc 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 [2024-11-18 10:43:29.376586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:03.580 [2024-11-18 10:43:29.376637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.580 [2024-11-18 10:43:29.376657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:03.580 [2024-11-18 10:43:29.376667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.580 [2024-11-18 10:43:29.378642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.580 [2024-11-18 10:43:29.378684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:03.580 BaseBdev3 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 spare_malloc 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 spare_delay 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.580 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 [2024-11-18 10:43:29.438020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:03.580 [2024-11-18 10:43:29.438068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.580 [2024-11-18 10:43:29.438084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:03.580 [2024-11-18 10:43:29.438094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.580 [2024-11-18 10:43:29.439992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.580 [2024-11-18 10:43:29.440035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:03.581 spare 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.581 [2024-11-18 10:43:29.450066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.581 [2024-11-18 10:43:29.451685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.581 [2024-11-18 10:43:29.451747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.581 [2024-11-18 10:43:29.451898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:03.581 [2024-11-18 10:43:29.451919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:03.581 [2024-11-18 10:43:29.452143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:03.581 [2024-11-18 10:43:29.456660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:03.581 [2024-11-18 10:43:29.456686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:03.581 [2024-11-18 10:43:29.456863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.581 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.840 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.840 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.840 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.840 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.840 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.840 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.840 "name": "raid_bdev1", 00:15:03.840 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:03.840 "strip_size_kb": 64, 00:15:03.840 "state": "online", 00:15:03.840 "raid_level": "raid5f", 00:15:03.840 "superblock": true, 00:15:03.840 "num_base_bdevs": 3, 00:15:03.840 "num_base_bdevs_discovered": 3, 00:15:03.840 "num_base_bdevs_operational": 3, 00:15:03.840 "base_bdevs_list": [ 00:15:03.840 { 00:15:03.840 "name": "BaseBdev1", 00:15:03.840 "uuid": "7aa4122a-57b4-5ff7-bd26-7453ded18778", 00:15:03.840 "is_configured": true, 00:15:03.840 "data_offset": 2048, 00:15:03.840 "data_size": 63488 00:15:03.840 }, 00:15:03.840 { 00:15:03.840 "name": "BaseBdev2", 00:15:03.840 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:03.840 "is_configured": true, 00:15:03.840 "data_offset": 2048, 00:15:03.840 "data_size": 63488 00:15:03.840 }, 00:15:03.840 { 00:15:03.840 "name": 
"BaseBdev3", 00:15:03.840 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:03.840 "is_configured": true, 00:15:03.840 "data_offset": 2048, 00:15:03.840 "data_size": 63488 00:15:03.840 } 00:15:03.840 ] 00:15:03.840 }' 00:15:03.840 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.840 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.099 [2024-11-18 10:43:29.942237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.099 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.359 10:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:04.359 10:43:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.359 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:04.359 [2024-11-18 10:43:30.193641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:04.359 /dev/nbd0 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.620 1+0 records in 00:15:04.620 1+0 records out 00:15:04.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037486 s, 10.9 MB/s 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
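At this point the test has matched `raid5f` and set `write_unit_size=256` (512-byte sectors) before echoing a 128 KiB buffer size; the dd that follows then writes in 131072-byte blocks. That size is not arbitrary: SPDK's raid5f module accepts only full-stripe writes, so the block size must equal the per-device strip size times the number of data disks. A minimal sketch of that arithmetic, using the values reported in this run (the variable names are illustrative, not taken from the SPDK scripts):

```shell
# Full-stripe sizing sketch, values from this run's bdev_raid_get_bdevs output.
strip_size_kb=64              # per-device strip ("strip_size_kb": 64)
num_base_bdevs=3              # raid5f: one strip per stripe holds parity
data_disks=$((num_base_bdevs - 1))
full_stripe_kb=$((strip_size_kb * data_disks))  # 128 KiB
bs=$((full_stripe_kb * 1024))                   # bytes for dd bs=
sectors=$((bs / 512))                           # write_unit_size in sectors
echo "$bs $sectors"
```

This reproduces both numbers visible in the log: `bs=131072` for the dd invocation and `write_unit_size=256`.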
00:15:04.620 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:05.189 496+0 records in 00:15:05.189 496+0 records out 00:15:05.189 65011712 bytes (65 MB, 62 MiB) copied, 0.519491 s, 125 MB/s 00:15:05.189 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.190 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.190 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.190 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.190 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:05.190 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.190 10:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.190 [2024-11-18 10:43:31.007483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:05.190 10:43:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.190 [2024-11-18 10:43:31.021502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
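Immediately above, `bdev_raid_remove_base_bdev BaseBdev1` is issued and `verify_raid_bdev_state raid_bdev1 online raid5f 64 2` then expects the array to stay online in degraded mode with 2 of 3 base bdevs. A condensed sketch of that acceptance check (`check_state` is a hypothetical helper, not a function from `bdev_raid.sh`, which instead compares fields extracted from the RPC JSON with jq):

```shell
# Degraded-state check sketch: state must be "online" and the discovered
# member count must equal both the operational count and the expected count.
check_state() {
  local state=$1 discovered=$2 operational=$3 expected=$4
  [ "$state" = online ] &&
  [ "$discovered" -eq "$operational" ] &&
  [ "$operational" -eq "$expected" ]
}

# Values from the JSON dump below: online, 2 discovered, 2 operational, expect 2.
check_state online 2 2 2 && echo OK
```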
00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.190 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.449 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.449 "name": "raid_bdev1", 00:15:05.449 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:05.449 "strip_size_kb": 64, 00:15:05.449 "state": "online", 00:15:05.449 "raid_level": "raid5f", 00:15:05.449 "superblock": true, 00:15:05.449 "num_base_bdevs": 3, 00:15:05.449 "num_base_bdevs_discovered": 2, 00:15:05.449 "num_base_bdevs_operational": 2, 00:15:05.449 "base_bdevs_list": [ 00:15:05.449 { 00:15:05.449 "name": null, 00:15:05.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.449 "is_configured": false, 00:15:05.449 "data_offset": 0, 00:15:05.450 "data_size": 63488 00:15:05.450 }, 00:15:05.450 { 00:15:05.450 "name": "BaseBdev2", 00:15:05.450 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:05.450 "is_configured": true, 00:15:05.450 "data_offset": 2048, 00:15:05.450 "data_size": 63488 00:15:05.450 }, 00:15:05.450 { 00:15:05.450 "name": "BaseBdev3", 00:15:05.450 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:05.450 "is_configured": true, 00:15:05.450 "data_offset": 2048, 00:15:05.450 "data_size": 63488 00:15:05.450 } 00:15:05.450 ] 00:15:05.450 }' 00:15:05.450 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.450 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.709 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.709 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.709 10:43:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.709 [2024-11-18 10:43:31.500574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.709 [2024-11-18 10:43:31.517437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:05.709 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.709 10:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:05.709 [2024-11-18 10:43:31.524862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.648 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.648 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.648 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.648 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.648 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.648 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.648 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.907 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.907 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.907 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.907 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.907 "name": "raid_bdev1", 00:15:06.907 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 
00:15:06.907 "strip_size_kb": 64, 00:15:06.907 "state": "online", 00:15:06.907 "raid_level": "raid5f", 00:15:06.908 "superblock": true, 00:15:06.908 "num_base_bdevs": 3, 00:15:06.908 "num_base_bdevs_discovered": 3, 00:15:06.908 "num_base_bdevs_operational": 3, 00:15:06.908 "process": { 00:15:06.908 "type": "rebuild", 00:15:06.908 "target": "spare", 00:15:06.908 "progress": { 00:15:06.908 "blocks": 20480, 00:15:06.908 "percent": 16 00:15:06.908 } 00:15:06.908 }, 00:15:06.908 "base_bdevs_list": [ 00:15:06.908 { 00:15:06.908 "name": "spare", 00:15:06.908 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:06.908 "is_configured": true, 00:15:06.908 "data_offset": 2048, 00:15:06.908 "data_size": 63488 00:15:06.908 }, 00:15:06.908 { 00:15:06.908 "name": "BaseBdev2", 00:15:06.908 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:06.908 "is_configured": true, 00:15:06.908 "data_offset": 2048, 00:15:06.908 "data_size": 63488 00:15:06.908 }, 00:15:06.908 { 00:15:06.908 "name": "BaseBdev3", 00:15:06.908 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:06.908 "is_configured": true, 00:15:06.908 "data_offset": 2048, 00:15:06.908 "data_size": 63488 00:15:06.908 } 00:15:06.908 ] 00:15:06.908 }' 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:06.908 [2024-11-18 10:43:32.676149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.908 [2024-11-18 10:43:32.734007] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.908 [2024-11-18 10:43:32.734063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.908 [2024-11-18 10:43:32.734082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.908 [2024-11-18 10:43:32.734090] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.908 
10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.908 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.167 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.167 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.167 "name": "raid_bdev1", 00:15:07.167 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:07.167 "strip_size_kb": 64, 00:15:07.167 "state": "online", 00:15:07.167 "raid_level": "raid5f", 00:15:07.168 "superblock": true, 00:15:07.168 "num_base_bdevs": 3, 00:15:07.168 "num_base_bdevs_discovered": 2, 00:15:07.168 "num_base_bdevs_operational": 2, 00:15:07.168 "base_bdevs_list": [ 00:15:07.168 { 00:15:07.168 "name": null, 00:15:07.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.168 "is_configured": false, 00:15:07.168 "data_offset": 0, 00:15:07.168 "data_size": 63488 00:15:07.168 }, 00:15:07.168 { 00:15:07.168 "name": "BaseBdev2", 00:15:07.168 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:07.168 "is_configured": true, 00:15:07.168 "data_offset": 2048, 00:15:07.168 "data_size": 63488 00:15:07.168 }, 00:15:07.168 { 00:15:07.168 "name": "BaseBdev3", 00:15:07.168 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:07.168 "is_configured": true, 00:15:07.168 "data_offset": 2048, 00:15:07.168 "data_size": 63488 00:15:07.168 } 00:15:07.168 ] 00:15:07.168 }' 00:15:07.168 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.168 10:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.428 10:43:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.428 "name": "raid_bdev1", 00:15:07.428 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:07.428 "strip_size_kb": 64, 00:15:07.428 "state": "online", 00:15:07.428 "raid_level": "raid5f", 00:15:07.428 "superblock": true, 00:15:07.428 "num_base_bdevs": 3, 00:15:07.428 "num_base_bdevs_discovered": 2, 00:15:07.428 "num_base_bdevs_operational": 2, 00:15:07.428 "base_bdevs_list": [ 00:15:07.428 { 00:15:07.428 "name": null, 00:15:07.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.428 "is_configured": false, 00:15:07.428 "data_offset": 0, 00:15:07.428 "data_size": 63488 00:15:07.428 }, 00:15:07.428 { 00:15:07.428 "name": "BaseBdev2", 00:15:07.428 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:07.428 "is_configured": true, 00:15:07.428 "data_offset": 2048, 00:15:07.428 "data_size": 63488 00:15:07.428 }, 00:15:07.428 { 00:15:07.428 "name": "BaseBdev3", 00:15:07.428 "uuid": 
"cff43097-3649-59b6-a70e-92f320e97e50", 00:15:07.428 "is_configured": true, 00:15:07.428 "data_offset": 2048, 00:15:07.428 "data_size": 63488 00:15:07.428 } 00:15:07.428 ] 00:15:07.428 }' 00:15:07.428 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.688 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.688 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.688 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.688 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:07.688 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.688 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.688 [2024-11-18 10:43:33.361432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.688 [2024-11-18 10:43:33.377828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:07.688 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.688 10:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:07.688 [2024-11-18 10:43:33.385263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.627 "name": "raid_bdev1", 00:15:08.627 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:08.627 "strip_size_kb": 64, 00:15:08.627 "state": "online", 00:15:08.627 "raid_level": "raid5f", 00:15:08.627 "superblock": true, 00:15:08.627 "num_base_bdevs": 3, 00:15:08.627 "num_base_bdevs_discovered": 3, 00:15:08.627 "num_base_bdevs_operational": 3, 00:15:08.627 "process": { 00:15:08.627 "type": "rebuild", 00:15:08.627 "target": "spare", 00:15:08.627 "progress": { 00:15:08.627 "blocks": 20480, 00:15:08.627 "percent": 16 00:15:08.627 } 00:15:08.627 }, 00:15:08.627 "base_bdevs_list": [ 00:15:08.627 { 00:15:08.627 "name": "spare", 00:15:08.627 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:08.627 "is_configured": true, 00:15:08.627 "data_offset": 2048, 00:15:08.627 "data_size": 63488 00:15:08.627 }, 00:15:08.627 { 00:15:08.627 "name": "BaseBdev2", 00:15:08.627 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:08.627 "is_configured": true, 00:15:08.627 "data_offset": 2048, 00:15:08.627 "data_size": 63488 00:15:08.627 }, 00:15:08.627 { 00:15:08.627 "name": "BaseBdev3", 00:15:08.627 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:08.627 
"is_configured": true, 00:15:08.627 "data_offset": 2048, 00:15:08.627 "data_size": 63488 00:15:08.627 } 00:15:08.627 ] 00:15:08.627 }' 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.627 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.886 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:08.887 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=556 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.887 "name": "raid_bdev1", 00:15:08.887 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:08.887 "strip_size_kb": 64, 00:15:08.887 "state": "online", 00:15:08.887 "raid_level": "raid5f", 00:15:08.887 "superblock": true, 00:15:08.887 "num_base_bdevs": 3, 00:15:08.887 "num_base_bdevs_discovered": 3, 00:15:08.887 "num_base_bdevs_operational": 3, 00:15:08.887 "process": { 00:15:08.887 "type": "rebuild", 00:15:08.887 "target": "spare", 00:15:08.887 "progress": { 00:15:08.887 "blocks": 22528, 00:15:08.887 "percent": 17 00:15:08.887 } 00:15:08.887 }, 00:15:08.887 "base_bdevs_list": [ 00:15:08.887 { 00:15:08.887 "name": "spare", 00:15:08.887 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:08.887 "is_configured": true, 00:15:08.887 "data_offset": 2048, 00:15:08.887 "data_size": 63488 00:15:08.887 }, 00:15:08.887 { 00:15:08.887 "name": "BaseBdev2", 00:15:08.887 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:08.887 "is_configured": true, 00:15:08.887 "data_offset": 2048, 00:15:08.887 "data_size": 63488 00:15:08.887 }, 00:15:08.887 { 00:15:08.887 "name": "BaseBdev3", 00:15:08.887 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:08.887 "is_configured": true, 00:15:08.887 "data_offset": 2048, 00:15:08.887 "data_size": 63488 00:15:08.887 } 00:15:08.887 ] 00:15:08.887 }' 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.887 10:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.825 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.085 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.085 "name": "raid_bdev1", 00:15:10.085 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:10.085 "strip_size_kb": 64, 00:15:10.085 "state": "online", 00:15:10.085 
"raid_level": "raid5f", 00:15:10.085 "superblock": true, 00:15:10.085 "num_base_bdevs": 3, 00:15:10.085 "num_base_bdevs_discovered": 3, 00:15:10.085 "num_base_bdevs_operational": 3, 00:15:10.085 "process": { 00:15:10.085 "type": "rebuild", 00:15:10.085 "target": "spare", 00:15:10.085 "progress": { 00:15:10.085 "blocks": 45056, 00:15:10.085 "percent": 35 00:15:10.085 } 00:15:10.085 }, 00:15:10.085 "base_bdevs_list": [ 00:15:10.085 { 00:15:10.085 "name": "spare", 00:15:10.085 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:10.085 "is_configured": true, 00:15:10.085 "data_offset": 2048, 00:15:10.085 "data_size": 63488 00:15:10.085 }, 00:15:10.085 { 00:15:10.085 "name": "BaseBdev2", 00:15:10.085 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:10.085 "is_configured": true, 00:15:10.085 "data_offset": 2048, 00:15:10.085 "data_size": 63488 00:15:10.085 }, 00:15:10.085 { 00:15:10.085 "name": "BaseBdev3", 00:15:10.085 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:10.085 "is_configured": true, 00:15:10.085 "data_offset": 2048, 00:15:10.085 "data_size": 63488 00:15:10.085 } 00:15:10.085 ] 00:15:10.085 }' 00:15:10.085 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.085 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.085 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.085 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.085 10:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.023 "name": "raid_bdev1", 00:15:11.023 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:11.023 "strip_size_kb": 64, 00:15:11.023 "state": "online", 00:15:11.023 "raid_level": "raid5f", 00:15:11.023 "superblock": true, 00:15:11.023 "num_base_bdevs": 3, 00:15:11.023 "num_base_bdevs_discovered": 3, 00:15:11.023 "num_base_bdevs_operational": 3, 00:15:11.023 "process": { 00:15:11.023 "type": "rebuild", 00:15:11.023 "target": "spare", 00:15:11.023 "progress": { 00:15:11.023 "blocks": 69632, 00:15:11.023 "percent": 54 00:15:11.023 } 00:15:11.023 }, 00:15:11.023 "base_bdevs_list": [ 00:15:11.023 { 00:15:11.023 "name": "spare", 00:15:11.023 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:11.023 "is_configured": true, 00:15:11.023 "data_offset": 2048, 00:15:11.023 "data_size": 63488 00:15:11.023 }, 00:15:11.023 { 00:15:11.023 "name": "BaseBdev2", 00:15:11.023 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:11.023 
"is_configured": true, 00:15:11.023 "data_offset": 2048, 00:15:11.023 "data_size": 63488 00:15:11.023 }, 00:15:11.023 { 00:15:11.023 "name": "BaseBdev3", 00:15:11.023 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:11.023 "is_configured": true, 00:15:11.023 "data_offset": 2048, 00:15:11.023 "data_size": 63488 00:15:11.023 } 00:15:11.023 ] 00:15:11.023 }' 00:15:11.023 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.282 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.282 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.282 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.282 10:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.221 10:43:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.221 10:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.221 10:43:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.221 "name": "raid_bdev1", 00:15:12.221 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:12.221 "strip_size_kb": 64, 00:15:12.221 "state": "online", 00:15:12.221 "raid_level": "raid5f", 00:15:12.221 "superblock": true, 00:15:12.221 "num_base_bdevs": 3, 00:15:12.221 "num_base_bdevs_discovered": 3, 00:15:12.221 "num_base_bdevs_operational": 3, 00:15:12.221 "process": { 00:15:12.221 "type": "rebuild", 00:15:12.221 "target": "spare", 00:15:12.221 "progress": { 00:15:12.221 "blocks": 92160, 00:15:12.221 "percent": 72 00:15:12.221 } 00:15:12.221 }, 00:15:12.221 "base_bdevs_list": [ 00:15:12.221 { 00:15:12.221 "name": "spare", 00:15:12.221 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:12.221 "is_configured": true, 00:15:12.221 "data_offset": 2048, 00:15:12.221 "data_size": 63488 00:15:12.221 }, 00:15:12.221 { 00:15:12.221 "name": "BaseBdev2", 00:15:12.221 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:12.221 "is_configured": true, 00:15:12.221 "data_offset": 2048, 00:15:12.221 "data_size": 63488 00:15:12.221 }, 00:15:12.221 { 00:15:12.221 "name": "BaseBdev3", 00:15:12.221 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:12.221 "is_configured": true, 00:15:12.221 "data_offset": 2048, 00:15:12.221 "data_size": 63488 00:15:12.221 } 00:15:12.221 ] 00:15:12.221 }' 00:15:12.221 10:43:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.221 10:43:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.221 10:43:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.481 10:43:38 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.481 10:43:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.426 "name": "raid_bdev1", 00:15:13.426 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:13.426 "strip_size_kb": 64, 00:15:13.426 "state": "online", 00:15:13.426 "raid_level": "raid5f", 00:15:13.426 "superblock": true, 00:15:13.426 "num_base_bdevs": 3, 00:15:13.426 "num_base_bdevs_discovered": 3, 00:15:13.426 "num_base_bdevs_operational": 3, 00:15:13.426 "process": { 00:15:13.426 "type": "rebuild", 00:15:13.426 "target": "spare", 00:15:13.426 "progress": { 00:15:13.426 "blocks": 116736, 
00:15:13.426 "percent": 91 00:15:13.426 } 00:15:13.426 }, 00:15:13.426 "base_bdevs_list": [ 00:15:13.426 { 00:15:13.426 "name": "spare", 00:15:13.426 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:13.426 "is_configured": true, 00:15:13.426 "data_offset": 2048, 00:15:13.426 "data_size": 63488 00:15:13.426 }, 00:15:13.426 { 00:15:13.426 "name": "BaseBdev2", 00:15:13.426 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:13.426 "is_configured": true, 00:15:13.426 "data_offset": 2048, 00:15:13.426 "data_size": 63488 00:15:13.426 }, 00:15:13.426 { 00:15:13.426 "name": "BaseBdev3", 00:15:13.426 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:13.426 "is_configured": true, 00:15:13.426 "data_offset": 2048, 00:15:13.426 "data_size": 63488 00:15:13.426 } 00:15:13.426 ] 00:15:13.426 }' 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.426 10:43:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.998 [2024-11-18 10:43:39.626289] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:13.998 [2024-11-18 10:43:39.626368] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:13.998 [2024-11-18 10:43:39.626486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.567 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.568 
10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.568 "name": "raid_bdev1", 00:15:14.568 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:14.568 "strip_size_kb": 64, 00:15:14.568 "state": "online", 00:15:14.568 "raid_level": "raid5f", 00:15:14.568 "superblock": true, 00:15:14.568 "num_base_bdevs": 3, 00:15:14.568 "num_base_bdevs_discovered": 3, 00:15:14.568 "num_base_bdevs_operational": 3, 00:15:14.568 "base_bdevs_list": [ 00:15:14.568 { 00:15:14.568 "name": "spare", 00:15:14.568 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:14.568 "is_configured": true, 00:15:14.568 "data_offset": 2048, 00:15:14.568 "data_size": 63488 00:15:14.568 }, 00:15:14.568 { 00:15:14.568 "name": "BaseBdev2", 00:15:14.568 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:14.568 "is_configured": true, 00:15:14.568 "data_offset": 2048, 00:15:14.568 "data_size": 63488 00:15:14.568 }, 00:15:14.568 { 00:15:14.568 "name": "BaseBdev3", 00:15:14.568 
"uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:14.568 "is_configured": true, 00:15:14.568 "data_offset": 2048, 00:15:14.568 "data_size": 63488 00:15:14.568 } 00:15:14.568 ] 00:15:14.568 }' 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.568 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.828 "name": 
"raid_bdev1", 00:15:14.828 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:14.828 "strip_size_kb": 64, 00:15:14.828 "state": "online", 00:15:14.828 "raid_level": "raid5f", 00:15:14.828 "superblock": true, 00:15:14.828 "num_base_bdevs": 3, 00:15:14.828 "num_base_bdevs_discovered": 3, 00:15:14.828 "num_base_bdevs_operational": 3, 00:15:14.828 "base_bdevs_list": [ 00:15:14.828 { 00:15:14.828 "name": "spare", 00:15:14.828 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev2", 00:15:14.828 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev3", 00:15:14.828 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 } 00:15:14.828 ] 00:15:14.828 }' 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.828 "name": "raid_bdev1", 00:15:14.828 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:14.828 "strip_size_kb": 64, 00:15:14.828 "state": "online", 00:15:14.828 "raid_level": "raid5f", 00:15:14.828 "superblock": true, 00:15:14.828 "num_base_bdevs": 3, 00:15:14.828 "num_base_bdevs_discovered": 3, 00:15:14.828 "num_base_bdevs_operational": 3, 00:15:14.828 "base_bdevs_list": [ 00:15:14.828 { 00:15:14.828 "name": "spare", 00:15:14.828 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev2", 
00:15:14.828 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev3", 00:15:14.828 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 } 00:15:14.828 ] 00:15:14.828 }' 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.828 10:43:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.398 [2024-11-18 10:43:41.036737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.398 [2024-11-18 10:43:41.036771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.398 [2024-11-18 10:43:41.036865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.398 [2024-11-18 10:43:41.036951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.398 [2024-11-18 10:43:41.036975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.398 10:43:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:15.398 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:15.658 /dev/nbd0 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.658 1+0 records in 00:15:15.658 1+0 records out 00:15:15.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404743 s, 10.1 MB/s 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:15:15.658 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:15.918 /dev/nbd1 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.918 1+0 records in 00:15:15.918 1+0 records out 00:15:15.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514218 s, 8.0 MB/s 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.918 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.178 10:43:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:16.439 10:43:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 [2024-11-18 10:43:42.218109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.439 [2024-11-18 10:43:42.218189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.439 [2024-11-18 10:43:42.218213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:16.439 [2024-11-18 10:43:42.218224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.439 [2024-11-18 10:43:42.220417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.439 [2024-11-18 10:43:42.220461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.439 [2024-11-18 10:43:42.220553] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:16.439 [2024-11-18 10:43:42.220616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.439 [2024-11-18 10:43:42.220740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.439 [2024-11-18 10:43:42.220846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.439 spare 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.439 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 [2024-11-18 10:43:42.320732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 
00:15:16.439 [2024-11-18 10:43:42.320764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.439 [2024-11-18 10:43:42.321010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:16.698 [2024-11-18 10:43:42.325669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:16.698 [2024-11-18 10:43:42.325693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:16.698 [2024-11-18 10:43:42.325855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.698 "name": "raid_bdev1", 00:15:16.698 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:16.698 "strip_size_kb": 64, 00:15:16.698 "state": "online", 00:15:16.698 "raid_level": "raid5f", 00:15:16.698 "superblock": true, 00:15:16.698 "num_base_bdevs": 3, 00:15:16.698 "num_base_bdevs_discovered": 3, 00:15:16.698 "num_base_bdevs_operational": 3, 00:15:16.698 "base_bdevs_list": [ 00:15:16.698 { 00:15:16.698 "name": "spare", 00:15:16.698 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:16.698 "is_configured": true, 00:15:16.698 "data_offset": 2048, 00:15:16.698 "data_size": 63488 00:15:16.698 }, 00:15:16.698 { 00:15:16.698 "name": "BaseBdev2", 00:15:16.698 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:16.698 "is_configured": true, 00:15:16.698 "data_offset": 2048, 00:15:16.698 "data_size": 63488 00:15:16.698 }, 00:15:16.698 { 00:15:16.698 "name": "BaseBdev3", 00:15:16.698 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:16.698 "is_configured": true, 00:15:16.698 "data_offset": 2048, 00:15:16.698 "data_size": 63488 00:15:16.698 } 00:15:16.698 ] 00:15:16.698 }' 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.698 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.957 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.957 10:43:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.958 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.958 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.958 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.958 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.958 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.958 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.958 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.958 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.217 "name": "raid_bdev1", 00:15:17.217 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:17.217 "strip_size_kb": 64, 00:15:17.217 "state": "online", 00:15:17.217 "raid_level": "raid5f", 00:15:17.217 "superblock": true, 00:15:17.217 "num_base_bdevs": 3, 00:15:17.217 "num_base_bdevs_discovered": 3, 00:15:17.217 "num_base_bdevs_operational": 3, 00:15:17.217 "base_bdevs_list": [ 00:15:17.217 { 00:15:17.217 "name": "spare", 00:15:17.217 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:17.217 "is_configured": true, 00:15:17.217 "data_offset": 2048, 00:15:17.217 "data_size": 63488 00:15:17.217 }, 00:15:17.217 { 00:15:17.217 "name": "BaseBdev2", 00:15:17.217 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:17.217 "is_configured": true, 00:15:17.217 "data_offset": 2048, 00:15:17.217 "data_size": 63488 00:15:17.217 }, 00:15:17.217 { 00:15:17.217 "name": "BaseBdev3", 00:15:17.217 "uuid": 
"cff43097-3649-59b6-a70e-92f320e97e50", 00:15:17.217 "is_configured": true, 00:15:17.217 "data_offset": 2048, 00:15:17.217 "data_size": 63488 00:15:17.217 } 00:15:17.217 ] 00:15:17.217 }' 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.217 10:43:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.217 [2024-11-18 10:43:42.998862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.217 
10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.217 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.217 "name": "raid_bdev1", 00:15:17.217 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:17.217 "strip_size_kb": 64, 00:15:17.217 "state": "online", 00:15:17.217 "raid_level": "raid5f", 00:15:17.217 "superblock": true, 00:15:17.217 "num_base_bdevs": 3, 00:15:17.217 "num_base_bdevs_discovered": 2, 00:15:17.217 "num_base_bdevs_operational": 2, 
00:15:17.217 "base_bdevs_list": [ 00:15:17.217 { 00:15:17.217 "name": null, 00:15:17.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.217 "is_configured": false, 00:15:17.217 "data_offset": 0, 00:15:17.217 "data_size": 63488 00:15:17.217 }, 00:15:17.217 { 00:15:17.217 "name": "BaseBdev2", 00:15:17.217 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:17.217 "is_configured": true, 00:15:17.217 "data_offset": 2048, 00:15:17.217 "data_size": 63488 00:15:17.217 }, 00:15:17.217 { 00:15:17.217 "name": "BaseBdev3", 00:15:17.217 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:17.217 "is_configured": true, 00:15:17.217 "data_offset": 2048, 00:15:17.217 "data_size": 63488 00:15:17.218 } 00:15:17.218 ] 00:15:17.218 }' 00:15:17.218 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.218 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.786 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:17.786 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.786 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.786 [2024-11-18 10:43:43.498040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.786 [2024-11-18 10:43:43.498211] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:17.786 [2024-11-18 10:43:43.498230] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:17.786 [2024-11-18 10:43:43.498262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.786 [2024-11-18 10:43:43.513158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:17.786 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.786 10:43:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:17.786 [2024-11-18 10:43:43.519668] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.741 "name": "raid_bdev1", 00:15:18.741 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:18.741 "strip_size_kb": 64, 00:15:18.741 "state": "online", 00:15:18.741 
"raid_level": "raid5f", 00:15:18.741 "superblock": true, 00:15:18.741 "num_base_bdevs": 3, 00:15:18.741 "num_base_bdevs_discovered": 3, 00:15:18.741 "num_base_bdevs_operational": 3, 00:15:18.741 "process": { 00:15:18.741 "type": "rebuild", 00:15:18.741 "target": "spare", 00:15:18.741 "progress": { 00:15:18.741 "blocks": 20480, 00:15:18.741 "percent": 16 00:15:18.741 } 00:15:18.741 }, 00:15:18.741 "base_bdevs_list": [ 00:15:18.741 { 00:15:18.741 "name": "spare", 00:15:18.741 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:18.741 "is_configured": true, 00:15:18.741 "data_offset": 2048, 00:15:18.741 "data_size": 63488 00:15:18.741 }, 00:15:18.741 { 00:15:18.741 "name": "BaseBdev2", 00:15:18.741 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:18.741 "is_configured": true, 00:15:18.741 "data_offset": 2048, 00:15:18.741 "data_size": 63488 00:15:18.741 }, 00:15:18.741 { 00:15:18.741 "name": "BaseBdev3", 00:15:18.741 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:18.741 "is_configured": true, 00:15:18.741 "data_offset": 2048, 00:15:18.741 "data_size": 63488 00:15:18.741 } 00:15:18.741 ] 00:15:18.741 }' 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.741 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.016 [2024-11-18 10:43:44.626879] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.016 [2024-11-18 10:43:44.627021] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.016 [2024-11-18 10:43:44.627067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.016 [2024-11-18 10:43:44.627080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.016 [2024-11-18 10:43:44.627089] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.016 "name": "raid_bdev1", 00:15:19.016 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:19.016 "strip_size_kb": 64, 00:15:19.016 "state": "online", 00:15:19.016 "raid_level": "raid5f", 00:15:19.016 "superblock": true, 00:15:19.016 "num_base_bdevs": 3, 00:15:19.016 "num_base_bdevs_discovered": 2, 00:15:19.016 "num_base_bdevs_operational": 2, 00:15:19.016 "base_bdevs_list": [ 00:15:19.016 { 00:15:19.016 "name": null, 00:15:19.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.016 "is_configured": false, 00:15:19.016 "data_offset": 0, 00:15:19.016 "data_size": 63488 00:15:19.016 }, 00:15:19.016 { 00:15:19.016 "name": "BaseBdev2", 00:15:19.016 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:19.016 "is_configured": true, 00:15:19.016 "data_offset": 2048, 00:15:19.016 "data_size": 63488 00:15:19.016 }, 00:15:19.016 { 00:15:19.016 "name": "BaseBdev3", 00:15:19.016 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:19.016 "is_configured": true, 00:15:19.016 "data_offset": 2048, 00:15:19.016 "data_size": 63488 00:15:19.016 } 00:15:19.016 ] 00:15:19.016 }' 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.016 10:43:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.276 10:43:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:19.276 10:43:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.276 10:43:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.276 [2024-11-18 10:43:45.143260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:19.276 [2024-11-18 10:43:45.143316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.276 [2024-11-18 10:43:45.143334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:19.276 [2024-11-18 10:43:45.143347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.276 [2024-11-18 10:43:45.143766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.276 [2024-11-18 10:43:45.143795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:19.276 [2024-11-18 10:43:45.143874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:19.276 [2024-11-18 10:43:45.143893] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.276 [2024-11-18 10:43:45.143903] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:19.276 [2024-11-18 10:43:45.143926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.276 [2024-11-18 10:43:45.157678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:19.535 spare 00:15:19.535 10:43:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.535 10:43:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:19.535 [2024-11-18 10:43:45.164053] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.475 "name": "raid_bdev1", 00:15:20.475 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:20.475 "strip_size_kb": 64, 00:15:20.475 "state": 
"online", 00:15:20.475 "raid_level": "raid5f", 00:15:20.475 "superblock": true, 00:15:20.475 "num_base_bdevs": 3, 00:15:20.475 "num_base_bdevs_discovered": 3, 00:15:20.475 "num_base_bdevs_operational": 3, 00:15:20.475 "process": { 00:15:20.475 "type": "rebuild", 00:15:20.475 "target": "spare", 00:15:20.475 "progress": { 00:15:20.475 "blocks": 20480, 00:15:20.475 "percent": 16 00:15:20.475 } 00:15:20.475 }, 00:15:20.475 "base_bdevs_list": [ 00:15:20.475 { 00:15:20.475 "name": "spare", 00:15:20.475 "uuid": "a20b7922-a320-5393-8339-f9518a63fa87", 00:15:20.475 "is_configured": true, 00:15:20.475 "data_offset": 2048, 00:15:20.475 "data_size": 63488 00:15:20.475 }, 00:15:20.475 { 00:15:20.475 "name": "BaseBdev2", 00:15:20.475 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:20.475 "is_configured": true, 00:15:20.475 "data_offset": 2048, 00:15:20.475 "data_size": 63488 00:15:20.475 }, 00:15:20.475 { 00:15:20.475 "name": "BaseBdev3", 00:15:20.475 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:20.475 "is_configured": true, 00:15:20.475 "data_offset": 2048, 00:15:20.475 "data_size": 63488 00:15:20.475 } 00:15:20.475 ] 00:15:20.475 }' 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.475 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.475 [2024-11-18 10:43:46.326882] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.735 [2024-11-18 10:43:46.370956] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:20.735 [2024-11-18 10:43:46.371005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.736 [2024-11-18 10:43:46.371022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.736 [2024-11-18 10:43:46.371028] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.736 "name": "raid_bdev1", 00:15:20.736 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:20.736 "strip_size_kb": 64, 00:15:20.736 "state": "online", 00:15:20.736 "raid_level": "raid5f", 00:15:20.736 "superblock": true, 00:15:20.736 "num_base_bdevs": 3, 00:15:20.736 "num_base_bdevs_discovered": 2, 00:15:20.736 "num_base_bdevs_operational": 2, 00:15:20.736 "base_bdevs_list": [ 00:15:20.736 { 00:15:20.736 "name": null, 00:15:20.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.736 "is_configured": false, 00:15:20.736 "data_offset": 0, 00:15:20.736 "data_size": 63488 00:15:20.736 }, 00:15:20.736 { 00:15:20.736 "name": "BaseBdev2", 00:15:20.736 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:20.736 "is_configured": true, 00:15:20.736 "data_offset": 2048, 00:15:20.736 "data_size": 63488 00:15:20.736 }, 00:15:20.736 { 00:15:20.736 "name": "BaseBdev3", 00:15:20.736 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:20.736 "is_configured": true, 00:15:20.736 "data_offset": 2048, 00:15:20.736 "data_size": 63488 00:15:20.736 } 00:15:20.736 ] 00:15:20.736 }' 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.736 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.308 "name": "raid_bdev1", 00:15:21.308 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:21.308 "strip_size_kb": 64, 00:15:21.308 "state": "online", 00:15:21.308 "raid_level": "raid5f", 00:15:21.308 "superblock": true, 00:15:21.308 "num_base_bdevs": 3, 00:15:21.308 "num_base_bdevs_discovered": 2, 00:15:21.308 "num_base_bdevs_operational": 2, 00:15:21.308 "base_bdevs_list": [ 00:15:21.308 { 00:15:21.308 "name": null, 00:15:21.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.308 "is_configured": false, 00:15:21.308 "data_offset": 0, 00:15:21.308 "data_size": 63488 00:15:21.308 }, 00:15:21.308 { 00:15:21.308 "name": "BaseBdev2", 00:15:21.308 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:21.308 "is_configured": true, 00:15:21.308 "data_offset": 2048, 00:15:21.308 "data_size": 63488 00:15:21.308 }, 00:15:21.308 { 00:15:21.308 "name": "BaseBdev3", 00:15:21.308 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:21.308 "is_configured": true, 
00:15:21.308 "data_offset": 2048, 00:15:21.308 "data_size": 63488 00:15:21.308 } 00:15:21.308 ] 00:15:21.308 }' 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.308 10:43:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.308 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.308 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:21.309 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.309 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.309 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.309 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:21.309 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.309 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.309 [2024-11-18 10:43:47.055100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:21.309 [2024-11-18 10:43:47.055159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.309 [2024-11-18 10:43:47.055191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:21.309 [2024-11-18 10:43:47.055201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.309 [2024-11-18 10:43:47.055612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.309 [2024-11-18 
10:43:47.055638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:21.309 [2024-11-18 10:43:47.055713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:21.309 [2024-11-18 10:43:47.055732] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:21.309 [2024-11-18 10:43:47.055742] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:21.309 [2024-11-18 10:43:47.055760] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:21.309 BaseBdev1 00:15:21.309 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.309 10:43:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.251 10:43:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.251 "name": "raid_bdev1", 00:15:22.251 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:22.251 "strip_size_kb": 64, 00:15:22.251 "state": "online", 00:15:22.251 "raid_level": "raid5f", 00:15:22.251 "superblock": true, 00:15:22.251 "num_base_bdevs": 3, 00:15:22.251 "num_base_bdevs_discovered": 2, 00:15:22.251 "num_base_bdevs_operational": 2, 00:15:22.251 "base_bdevs_list": [ 00:15:22.251 { 00:15:22.251 "name": null, 00:15:22.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.251 "is_configured": false, 00:15:22.251 "data_offset": 0, 00:15:22.251 "data_size": 63488 00:15:22.251 }, 00:15:22.251 { 00:15:22.251 "name": "BaseBdev2", 00:15:22.251 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:22.251 "is_configured": true, 00:15:22.251 "data_offset": 2048, 00:15:22.251 "data_size": 63488 00:15:22.251 }, 00:15:22.251 { 00:15:22.251 "name": "BaseBdev3", 00:15:22.251 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:22.251 "is_configured": true, 00:15:22.251 "data_offset": 2048, 00:15:22.251 "data_size": 63488 00:15:22.251 } 00:15:22.251 ] 00:15:22.251 }' 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.251 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.822 "name": "raid_bdev1", 00:15:22.822 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:22.822 "strip_size_kb": 64, 00:15:22.822 "state": "online", 00:15:22.822 "raid_level": "raid5f", 00:15:22.822 "superblock": true, 00:15:22.822 "num_base_bdevs": 3, 00:15:22.822 "num_base_bdevs_discovered": 2, 00:15:22.822 "num_base_bdevs_operational": 2, 00:15:22.822 "base_bdevs_list": [ 00:15:22.822 { 00:15:22.822 "name": null, 00:15:22.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.822 "is_configured": false, 00:15:22.822 "data_offset": 0, 00:15:22.822 "data_size": 63488 00:15:22.822 }, 00:15:22.822 { 00:15:22.822 "name": "BaseBdev2", 00:15:22.822 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 
00:15:22.822 "is_configured": true, 00:15:22.822 "data_offset": 2048, 00:15:22.822 "data_size": 63488 00:15:22.822 }, 00:15:22.822 { 00:15:22.822 "name": "BaseBdev3", 00:15:22.822 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:22.822 "is_configured": true, 00:15:22.822 "data_offset": 2048, 00:15:22.822 "data_size": 63488 00:15:22.822 } 00:15:22.822 ] 00:15:22.822 }' 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.822 10:43:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.822 [2024-11-18 10:43:48.696521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.822 [2024-11-18 10:43:48.696664] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:22.822 [2024-11-18 10:43:48.696689] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:22.822 request: 00:15:22.822 { 00:15:22.822 "base_bdev": "BaseBdev1", 00:15:22.822 "raid_bdev": "raid_bdev1", 00:15:22.822 "method": "bdev_raid_add_base_bdev", 00:15:22.822 "req_id": 1 00:15:22.822 } 00:15:22.822 Got JSON-RPC error response 00:15:22.822 response: 00:15:22.822 { 00:15:22.822 "code": -22, 00:15:22.822 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:22.822 } 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.822 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:23.083 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:23.083 10:43:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.024 "name": "raid_bdev1", 00:15:24.024 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:24.024 "strip_size_kb": 64, 00:15:24.024 "state": "online", 00:15:24.024 "raid_level": "raid5f", 00:15:24.024 "superblock": true, 00:15:24.024 "num_base_bdevs": 3, 00:15:24.024 "num_base_bdevs_discovered": 2, 00:15:24.024 "num_base_bdevs_operational": 2, 00:15:24.024 "base_bdevs_list": [ 00:15:24.024 { 00:15:24.024 "name": null, 00:15:24.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.024 "is_configured": false, 00:15:24.024 "data_offset": 0, 00:15:24.024 "data_size": 63488 00:15:24.024 }, 00:15:24.024 { 00:15:24.024 
"name": "BaseBdev2", 00:15:24.024 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:24.024 "is_configured": true, 00:15:24.024 "data_offset": 2048, 00:15:24.024 "data_size": 63488 00:15:24.024 }, 00:15:24.024 { 00:15:24.024 "name": "BaseBdev3", 00:15:24.024 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:24.024 "is_configured": true, 00:15:24.024 "data_offset": 2048, 00:15:24.024 "data_size": 63488 00:15:24.024 } 00:15:24.024 ] 00:15:24.024 }' 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.024 10:43:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.596 "name": "raid_bdev1", 00:15:24.596 "uuid": "69948870-194c-463d-9ece-91efe74a2ba5", 00:15:24.596 
"strip_size_kb": 64, 00:15:24.596 "state": "online", 00:15:24.596 "raid_level": "raid5f", 00:15:24.596 "superblock": true, 00:15:24.596 "num_base_bdevs": 3, 00:15:24.596 "num_base_bdevs_discovered": 2, 00:15:24.596 "num_base_bdevs_operational": 2, 00:15:24.596 "base_bdevs_list": [ 00:15:24.596 { 00:15:24.596 "name": null, 00:15:24.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.596 "is_configured": false, 00:15:24.596 "data_offset": 0, 00:15:24.596 "data_size": 63488 00:15:24.596 }, 00:15:24.596 { 00:15:24.596 "name": "BaseBdev2", 00:15:24.596 "uuid": "edfcab3f-7be6-5526-b05e-8c1f4416311c", 00:15:24.596 "is_configured": true, 00:15:24.596 "data_offset": 2048, 00:15:24.596 "data_size": 63488 00:15:24.596 }, 00:15:24.596 { 00:15:24.596 "name": "BaseBdev3", 00:15:24.596 "uuid": "cff43097-3649-59b6-a70e-92f320e97e50", 00:15:24.596 "is_configured": true, 00:15:24.596 "data_offset": 2048, 00:15:24.596 "data_size": 63488 00:15:24.596 } 00:15:24.596 ] 00:15:24.596 }' 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81804 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81804 ']' 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81804 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.596 10:43:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81804 killing process with pid 81804 Received shutdown signal, test time was about 60.000000 seconds 00:15:24.596 00:15:24.596 Latency(us) 00:15:24.596 [2024-11-18T10:43:50.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.596 [2024-11-18T10:43:50.481Z] =================================================================================================================== 00:15:24.596 [2024-11-18T10:43:50.481Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81804' 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81804 00:15:24.596 [2024-11-18 10:43:50.397978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.596 [2024-11-18 10:43:50.398086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.596 [2024-11-18 10:43:50.398141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.596 10:43:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81804 00:15:24.596 [2024-11-18 10:43:50.398155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:25.169 [2024-11-18 10:43:50.761908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.111 10:43:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:26.111 00:15:26.111 real 0m23.508s 00:15:26.111 user 0m30.246s 
00:15:26.111 sys 0m3.104s 00:15:26.111 10:43:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.111 10:43:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.111 ************************************ 00:15:26.111 END TEST raid5f_rebuild_test_sb 00:15:26.111 ************************************ 00:15:26.111 10:43:51 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:26.111 10:43:51 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:26.111 10:43:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:26.111 10:43:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.111 10:43:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.111 ************************************ 00:15:26.111 START TEST raid5f_state_function_test 00:15:26.111 ************************************ 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82557 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82557' 00:15:26.112 Process raid pid: 82557 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82557 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82557 ']' 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.112 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.112 [2024-11-18 10:43:51.962162] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:26.112 [2024-11-18 10:43:51.962284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.373 [2024-11-18 10:43:52.130854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.373 [2024-11-18 10:43:52.235585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.633 [2024-11-18 10:43:52.430542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.633 [2024-11-18 10:43:52.430576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.205 [2024-11-18 10:43:52.785485] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.205 [2024-11-18 10:43:52.785537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.205 [2024-11-18 10:43:52.785547] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.205 [2024-11-18 10:43:52.785555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.205 [2024-11-18 10:43:52.785561] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:27.205 [2024-11-18 10:43:52.785570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.205 [2024-11-18 10:43:52.785575] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:27.205 [2024-11-18 10:43:52.785584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.205 10:43:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.205 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.205 "name": "Existed_Raid", 00:15:27.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.205 "strip_size_kb": 64, 00:15:27.205 "state": "configuring", 00:15:27.205 "raid_level": "raid5f", 00:15:27.206 "superblock": false, 00:15:27.206 "num_base_bdevs": 4, 00:15:27.206 "num_base_bdevs_discovered": 0, 00:15:27.206 "num_base_bdevs_operational": 4, 00:15:27.206 "base_bdevs_list": [ 00:15:27.206 { 00:15:27.206 "name": "BaseBdev1", 00:15:27.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.206 "is_configured": false, 00:15:27.206 "data_offset": 0, 00:15:27.206 "data_size": 0 00:15:27.206 }, 00:15:27.206 { 00:15:27.206 "name": "BaseBdev2", 00:15:27.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.206 "is_configured": false, 00:15:27.206 "data_offset": 0, 00:15:27.206 "data_size": 0 00:15:27.206 }, 00:15:27.206 { 00:15:27.206 "name": "BaseBdev3", 00:15:27.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.206 "is_configured": false, 00:15:27.206 "data_offset": 0, 00:15:27.206 "data_size": 0 00:15:27.206 }, 00:15:27.206 { 00:15:27.206 "name": "BaseBdev4", 00:15:27.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.206 "is_configured": false, 00:15:27.206 "data_offset": 0, 00:15:27.206 "data_size": 0 00:15:27.206 } 00:15:27.206 ] 00:15:27.206 }' 00:15:27.206 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.206 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.467 [2024-11-18 10:43:53.264588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.467 [2024-11-18 10:43:53.264623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.467 [2024-11-18 10:43:53.276578] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.467 [2024-11-18 10:43:53.276614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.467 [2024-11-18 10:43:53.276622] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.467 [2024-11-18 10:43:53.276630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.467 [2024-11-18 10:43:53.276636] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.467 [2024-11-18 10:43:53.276643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.467 [2024-11-18 10:43:53.276649] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:27.467 [2024-11-18 10:43:53.276656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.467 [2024-11-18 10:43:53.322597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.467 BaseBdev1 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.467 
10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.467 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.467 [ 00:15:27.467 { 00:15:27.467 "name": "BaseBdev1", 00:15:27.467 "aliases": [ 00:15:27.467 "3370fc04-9c15-4eac-97dc-84d4165029e1" 00:15:27.467 ], 00:15:27.467 "product_name": "Malloc disk", 00:15:27.467 "block_size": 512, 00:15:27.467 "num_blocks": 65536, 00:15:27.467 "uuid": "3370fc04-9c15-4eac-97dc-84d4165029e1", 00:15:27.728 "assigned_rate_limits": { 00:15:27.728 "rw_ios_per_sec": 0, 00:15:27.728 "rw_mbytes_per_sec": 0, 00:15:27.728 "r_mbytes_per_sec": 0, 00:15:27.728 "w_mbytes_per_sec": 0 00:15:27.728 }, 00:15:27.728 "claimed": true, 00:15:27.728 "claim_type": "exclusive_write", 00:15:27.728 "zoned": false, 00:15:27.728 "supported_io_types": { 00:15:27.728 "read": true, 00:15:27.728 "write": true, 00:15:27.728 "unmap": true, 00:15:27.728 "flush": true, 00:15:27.729 "reset": true, 00:15:27.729 "nvme_admin": false, 00:15:27.729 "nvme_io": false, 00:15:27.729 "nvme_io_md": false, 00:15:27.729 "write_zeroes": true, 00:15:27.729 "zcopy": true, 00:15:27.729 "get_zone_info": false, 00:15:27.729 "zone_management": false, 00:15:27.729 "zone_append": false, 00:15:27.729 "compare": false, 00:15:27.729 "compare_and_write": false, 00:15:27.729 "abort": true, 00:15:27.729 "seek_hole": false, 00:15:27.729 "seek_data": false, 00:15:27.729 "copy": true, 00:15:27.729 "nvme_iov_md": false 00:15:27.729 }, 00:15:27.729 "memory_domains": [ 00:15:27.729 { 00:15:27.729 "dma_device_id": "system", 00:15:27.729 "dma_device_type": 1 00:15:27.729 }, 00:15:27.729 { 00:15:27.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.729 "dma_device_type": 2 00:15:27.729 } 00:15:27.729 ], 00:15:27.729 "driver_specific": {} 00:15:27.729 } 
00:15:27.729 ] 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.729 "name": "Existed_Raid", 00:15:27.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.729 "strip_size_kb": 64, 00:15:27.729 "state": "configuring", 00:15:27.729 "raid_level": "raid5f", 00:15:27.729 "superblock": false, 00:15:27.729 "num_base_bdevs": 4, 00:15:27.729 "num_base_bdevs_discovered": 1, 00:15:27.729 "num_base_bdevs_operational": 4, 00:15:27.729 "base_bdevs_list": [ 00:15:27.729 { 00:15:27.729 "name": "BaseBdev1", 00:15:27.729 "uuid": "3370fc04-9c15-4eac-97dc-84d4165029e1", 00:15:27.729 "is_configured": true, 00:15:27.729 "data_offset": 0, 00:15:27.729 "data_size": 65536 00:15:27.729 }, 00:15:27.729 { 00:15:27.729 "name": "BaseBdev2", 00:15:27.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.729 "is_configured": false, 00:15:27.729 "data_offset": 0, 00:15:27.729 "data_size": 0 00:15:27.729 }, 00:15:27.729 { 00:15:27.729 "name": "BaseBdev3", 00:15:27.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.729 "is_configured": false, 00:15:27.729 "data_offset": 0, 00:15:27.729 "data_size": 0 00:15:27.729 }, 00:15:27.729 { 00:15:27.729 "name": "BaseBdev4", 00:15:27.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.729 "is_configured": false, 00:15:27.729 "data_offset": 0, 00:15:27.729 "data_size": 0 00:15:27.729 } 00:15:27.729 ] 00:15:27.729 }' 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.729 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.990 
[2024-11-18 10:43:53.801909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.990 [2024-11-18 10:43:53.801951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.990 [2024-11-18 10:43:53.813944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.990 [2024-11-18 10:43:53.815649] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.990 [2024-11-18 10:43:53.815687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.990 [2024-11-18 10:43:53.815696] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.990 [2024-11-18 10:43:53.815706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.990 [2024-11-18 10:43:53.815713] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:27.990 [2024-11-18 10:43:53.815721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.990 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.990 "name": "Existed_Raid", 00:15:27.990 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:27.990 "strip_size_kb": 64, 00:15:27.990 "state": "configuring", 00:15:27.990 "raid_level": "raid5f", 00:15:27.990 "superblock": false, 00:15:27.990 "num_base_bdevs": 4, 00:15:27.990 "num_base_bdevs_discovered": 1, 00:15:27.990 "num_base_bdevs_operational": 4, 00:15:27.990 "base_bdevs_list": [ 00:15:27.990 { 00:15:27.990 "name": "BaseBdev1", 00:15:27.990 "uuid": "3370fc04-9c15-4eac-97dc-84d4165029e1", 00:15:27.990 "is_configured": true, 00:15:27.990 "data_offset": 0, 00:15:27.990 "data_size": 65536 00:15:27.990 }, 00:15:27.990 { 00:15:27.990 "name": "BaseBdev2", 00:15:27.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.990 "is_configured": false, 00:15:27.990 "data_offset": 0, 00:15:27.990 "data_size": 0 00:15:27.990 }, 00:15:27.990 { 00:15:27.990 "name": "BaseBdev3", 00:15:27.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.990 "is_configured": false, 00:15:27.990 "data_offset": 0, 00:15:27.990 "data_size": 0 00:15:27.990 }, 00:15:27.990 { 00:15:27.990 "name": "BaseBdev4", 00:15:27.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.990 "is_configured": false, 00:15:27.990 "data_offset": 0, 00:15:27.990 "data_size": 0 00:15:27.990 } 00:15:27.990 ] 00:15:27.991 }' 00:15:27.991 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.991 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.562 [2024-11-18 10:43:54.327691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.562 BaseBdev2 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.562 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.562 [ 00:15:28.562 { 00:15:28.562 "name": "BaseBdev2", 00:15:28.562 "aliases": [ 00:15:28.562 "24bb1e06-efa9-482c-971b-16171e47c284" 00:15:28.562 ], 00:15:28.562 "product_name": "Malloc disk", 00:15:28.562 "block_size": 512, 00:15:28.562 "num_blocks": 65536, 00:15:28.562 "uuid": "24bb1e06-efa9-482c-971b-16171e47c284", 00:15:28.562 "assigned_rate_limits": { 00:15:28.562 "rw_ios_per_sec": 0, 00:15:28.562 "rw_mbytes_per_sec": 0, 00:15:28.562 
"r_mbytes_per_sec": 0, 00:15:28.562 "w_mbytes_per_sec": 0 00:15:28.563 }, 00:15:28.563 "claimed": true, 00:15:28.563 "claim_type": "exclusive_write", 00:15:28.563 "zoned": false, 00:15:28.563 "supported_io_types": { 00:15:28.563 "read": true, 00:15:28.563 "write": true, 00:15:28.563 "unmap": true, 00:15:28.563 "flush": true, 00:15:28.563 "reset": true, 00:15:28.563 "nvme_admin": false, 00:15:28.563 "nvme_io": false, 00:15:28.563 "nvme_io_md": false, 00:15:28.563 "write_zeroes": true, 00:15:28.563 "zcopy": true, 00:15:28.563 "get_zone_info": false, 00:15:28.563 "zone_management": false, 00:15:28.563 "zone_append": false, 00:15:28.563 "compare": false, 00:15:28.563 "compare_and_write": false, 00:15:28.563 "abort": true, 00:15:28.563 "seek_hole": false, 00:15:28.563 "seek_data": false, 00:15:28.563 "copy": true, 00:15:28.563 "nvme_iov_md": false 00:15:28.563 }, 00:15:28.563 "memory_domains": [ 00:15:28.563 { 00:15:28.563 "dma_device_id": "system", 00:15:28.563 "dma_device_type": 1 00:15:28.563 }, 00:15:28.563 { 00:15:28.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.563 "dma_device_type": 2 00:15:28.563 } 00:15:28.563 ], 00:15:28.563 "driver_specific": {} 00:15:28.563 } 00:15:28.563 ] 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.563 "name": "Existed_Raid", 00:15:28.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.563 "strip_size_kb": 64, 00:15:28.563 "state": "configuring", 00:15:28.563 "raid_level": "raid5f", 00:15:28.563 "superblock": false, 00:15:28.563 "num_base_bdevs": 4, 00:15:28.563 "num_base_bdevs_discovered": 2, 00:15:28.563 "num_base_bdevs_operational": 4, 00:15:28.563 "base_bdevs_list": [ 00:15:28.563 { 00:15:28.563 "name": "BaseBdev1", 00:15:28.563 "uuid": 
"3370fc04-9c15-4eac-97dc-84d4165029e1", 00:15:28.563 "is_configured": true, 00:15:28.563 "data_offset": 0, 00:15:28.563 "data_size": 65536 00:15:28.563 }, 00:15:28.563 { 00:15:28.563 "name": "BaseBdev2", 00:15:28.563 "uuid": "24bb1e06-efa9-482c-971b-16171e47c284", 00:15:28.563 "is_configured": true, 00:15:28.563 "data_offset": 0, 00:15:28.563 "data_size": 65536 00:15:28.563 }, 00:15:28.563 { 00:15:28.563 "name": "BaseBdev3", 00:15:28.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.563 "is_configured": false, 00:15:28.563 "data_offset": 0, 00:15:28.563 "data_size": 0 00:15:28.563 }, 00:15:28.563 { 00:15:28.563 "name": "BaseBdev4", 00:15:28.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.563 "is_configured": false, 00:15:28.563 "data_offset": 0, 00:15:28.563 "data_size": 0 00:15:28.563 } 00:15:28.563 ] 00:15:28.563 }' 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.563 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.135 [2024-11-18 10:43:54.905625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.135 BaseBdev3 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.135 [ 00:15:29.135 { 00:15:29.135 "name": "BaseBdev3", 00:15:29.135 "aliases": [ 00:15:29.135 "21e3d29a-fe91-408e-8415-c84d9ff22cde" 00:15:29.135 ], 00:15:29.135 "product_name": "Malloc disk", 00:15:29.135 "block_size": 512, 00:15:29.135 "num_blocks": 65536, 00:15:29.135 "uuid": "21e3d29a-fe91-408e-8415-c84d9ff22cde", 00:15:29.135 "assigned_rate_limits": { 00:15:29.135 "rw_ios_per_sec": 0, 00:15:29.135 "rw_mbytes_per_sec": 0, 00:15:29.135 "r_mbytes_per_sec": 0, 00:15:29.135 "w_mbytes_per_sec": 0 00:15:29.135 }, 00:15:29.135 "claimed": true, 00:15:29.135 "claim_type": "exclusive_write", 00:15:29.135 "zoned": false, 00:15:29.135 "supported_io_types": { 00:15:29.135 "read": true, 00:15:29.135 "write": true, 00:15:29.135 "unmap": true, 00:15:29.135 "flush": true, 00:15:29.135 "reset": true, 00:15:29.135 "nvme_admin": false, 
00:15:29.135 "nvme_io": false, 00:15:29.135 "nvme_io_md": false, 00:15:29.135 "write_zeroes": true, 00:15:29.135 "zcopy": true, 00:15:29.135 "get_zone_info": false, 00:15:29.135 "zone_management": false, 00:15:29.135 "zone_append": false, 00:15:29.135 "compare": false, 00:15:29.135 "compare_and_write": false, 00:15:29.135 "abort": true, 00:15:29.135 "seek_hole": false, 00:15:29.135 "seek_data": false, 00:15:29.135 "copy": true, 00:15:29.135 "nvme_iov_md": false 00:15:29.135 }, 00:15:29.135 "memory_domains": [ 00:15:29.135 { 00:15:29.135 "dma_device_id": "system", 00:15:29.135 "dma_device_type": 1 00:15:29.135 }, 00:15:29.135 { 00:15:29.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.135 "dma_device_type": 2 00:15:29.135 } 00:15:29.135 ], 00:15:29.135 "driver_specific": {} 00:15:29.135 } 00:15:29.135 ] 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.135 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.136 "name": "Existed_Raid", 00:15:29.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.136 "strip_size_kb": 64, 00:15:29.136 "state": "configuring", 00:15:29.136 "raid_level": "raid5f", 00:15:29.136 "superblock": false, 00:15:29.136 "num_base_bdevs": 4, 00:15:29.136 "num_base_bdevs_discovered": 3, 00:15:29.136 "num_base_bdevs_operational": 4, 00:15:29.136 "base_bdevs_list": [ 00:15:29.136 { 00:15:29.136 "name": "BaseBdev1", 00:15:29.136 "uuid": "3370fc04-9c15-4eac-97dc-84d4165029e1", 00:15:29.136 "is_configured": true, 00:15:29.136 "data_offset": 0, 00:15:29.136 "data_size": 65536 00:15:29.136 }, 00:15:29.136 { 00:15:29.136 "name": "BaseBdev2", 00:15:29.136 "uuid": "24bb1e06-efa9-482c-971b-16171e47c284", 00:15:29.136 "is_configured": true, 00:15:29.136 "data_offset": 0, 00:15:29.136 "data_size": 65536 00:15:29.136 }, 00:15:29.136 { 
00:15:29.136 "name": "BaseBdev3", 00:15:29.136 "uuid": "21e3d29a-fe91-408e-8415-c84d9ff22cde", 00:15:29.136 "is_configured": true, 00:15:29.136 "data_offset": 0, 00:15:29.136 "data_size": 65536 00:15:29.136 }, 00:15:29.136 { 00:15:29.136 "name": "BaseBdev4", 00:15:29.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.136 "is_configured": false, 00:15:29.136 "data_offset": 0, 00:15:29.136 "data_size": 0 00:15:29.136 } 00:15:29.136 ] 00:15:29.136 }' 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.136 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.708 [2024-11-18 10:43:55.390746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:29.708 [2024-11-18 10:43:55.390874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:29.708 [2024-11-18 10:43:55.390902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:29.708 [2024-11-18 10:43:55.391219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:29.708 [2024-11-18 10:43:55.398121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:29.708 [2024-11-18 10:43:55.398192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:29.708 [2024-11-18 10:43:55.398480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.708 BaseBdev4 00:15:29.708 10:43:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.708 [ 00:15:29.708 { 00:15:29.708 "name": "BaseBdev4", 00:15:29.708 "aliases": [ 00:15:29.708 "449967ca-59c3-4301-a65f-8c1dc48ab1e2" 00:15:29.708 ], 00:15:29.708 "product_name": "Malloc disk", 00:15:29.708 "block_size": 512, 00:15:29.708 "num_blocks": 65536, 00:15:29.708 "uuid": "449967ca-59c3-4301-a65f-8c1dc48ab1e2", 00:15:29.708 "assigned_rate_limits": { 00:15:29.708 "rw_ios_per_sec": 0, 00:15:29.708 
"rw_mbytes_per_sec": 0, 00:15:29.708 "r_mbytes_per_sec": 0, 00:15:29.708 "w_mbytes_per_sec": 0 00:15:29.708 }, 00:15:29.708 "claimed": true, 00:15:29.708 "claim_type": "exclusive_write", 00:15:29.708 "zoned": false, 00:15:29.708 "supported_io_types": { 00:15:29.708 "read": true, 00:15:29.708 "write": true, 00:15:29.708 "unmap": true, 00:15:29.708 "flush": true, 00:15:29.708 "reset": true, 00:15:29.708 "nvme_admin": false, 00:15:29.708 "nvme_io": false, 00:15:29.708 "nvme_io_md": false, 00:15:29.708 "write_zeroes": true, 00:15:29.708 "zcopy": true, 00:15:29.708 "get_zone_info": false, 00:15:29.708 "zone_management": false, 00:15:29.708 "zone_append": false, 00:15:29.708 "compare": false, 00:15:29.708 "compare_and_write": false, 00:15:29.708 "abort": true, 00:15:29.708 "seek_hole": false, 00:15:29.708 "seek_data": false, 00:15:29.708 "copy": true, 00:15:29.708 "nvme_iov_md": false 00:15:29.708 }, 00:15:29.708 "memory_domains": [ 00:15:29.708 { 00:15:29.708 "dma_device_id": "system", 00:15:29.708 "dma_device_type": 1 00:15:29.708 }, 00:15:29.708 { 00:15:29.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.708 "dma_device_type": 2 00:15:29.708 } 00:15:29.708 ], 00:15:29.708 "driver_specific": {} 00:15:29.708 } 00:15:29.708 ] 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.708 10:43:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.708 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.709 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.709 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.709 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.709 "name": "Existed_Raid", 00:15:29.709 "uuid": "a412aa4e-2e1d-4058-8a3e-faa1497b60d7", 00:15:29.709 "strip_size_kb": 64, 00:15:29.709 "state": "online", 00:15:29.709 "raid_level": "raid5f", 00:15:29.709 "superblock": false, 00:15:29.709 "num_base_bdevs": 4, 00:15:29.709 "num_base_bdevs_discovered": 4, 00:15:29.709 "num_base_bdevs_operational": 4, 00:15:29.709 "base_bdevs_list": [ 00:15:29.709 { 00:15:29.709 "name": 
"BaseBdev1", 00:15:29.709 "uuid": "3370fc04-9c15-4eac-97dc-84d4165029e1", 00:15:29.709 "is_configured": true, 00:15:29.709 "data_offset": 0, 00:15:29.709 "data_size": 65536 00:15:29.709 }, 00:15:29.709 { 00:15:29.709 "name": "BaseBdev2", 00:15:29.709 "uuid": "24bb1e06-efa9-482c-971b-16171e47c284", 00:15:29.709 "is_configured": true, 00:15:29.709 "data_offset": 0, 00:15:29.709 "data_size": 65536 00:15:29.709 }, 00:15:29.709 { 00:15:29.709 "name": "BaseBdev3", 00:15:29.709 "uuid": "21e3d29a-fe91-408e-8415-c84d9ff22cde", 00:15:29.709 "is_configured": true, 00:15:29.709 "data_offset": 0, 00:15:29.709 "data_size": 65536 00:15:29.709 }, 00:15:29.709 { 00:15:29.709 "name": "BaseBdev4", 00:15:29.709 "uuid": "449967ca-59c3-4301-a65f-8c1dc48ab1e2", 00:15:29.709 "is_configured": true, 00:15:29.709 "data_offset": 0, 00:15:29.709 "data_size": 65536 00:15:29.709 } 00:15:29.709 ] 00:15:29.709 }' 00:15:29.709 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.709 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.279 [2024-11-18 10:43:55.869588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.279 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:30.279 "name": "Existed_Raid", 00:15:30.279 "aliases": [ 00:15:30.279 "a412aa4e-2e1d-4058-8a3e-faa1497b60d7" 00:15:30.279 ], 00:15:30.279 "product_name": "Raid Volume", 00:15:30.279 "block_size": 512, 00:15:30.279 "num_blocks": 196608, 00:15:30.279 "uuid": "a412aa4e-2e1d-4058-8a3e-faa1497b60d7", 00:15:30.279 "assigned_rate_limits": { 00:15:30.279 "rw_ios_per_sec": 0, 00:15:30.279 "rw_mbytes_per_sec": 0, 00:15:30.279 "r_mbytes_per_sec": 0, 00:15:30.279 "w_mbytes_per_sec": 0 00:15:30.279 }, 00:15:30.279 "claimed": false, 00:15:30.279 "zoned": false, 00:15:30.279 "supported_io_types": { 00:15:30.279 "read": true, 00:15:30.279 "write": true, 00:15:30.279 "unmap": false, 00:15:30.279 "flush": false, 00:15:30.279 "reset": true, 00:15:30.279 "nvme_admin": false, 00:15:30.279 "nvme_io": false, 00:15:30.279 "nvme_io_md": false, 00:15:30.279 "write_zeroes": true, 00:15:30.279 "zcopy": false, 00:15:30.279 "get_zone_info": false, 00:15:30.279 "zone_management": false, 00:15:30.279 "zone_append": false, 00:15:30.279 "compare": false, 00:15:30.279 "compare_and_write": false, 00:15:30.279 "abort": false, 00:15:30.279 "seek_hole": false, 00:15:30.279 "seek_data": false, 00:15:30.279 "copy": false, 00:15:30.279 "nvme_iov_md": false 00:15:30.279 }, 00:15:30.279 "driver_specific": { 00:15:30.279 "raid": { 00:15:30.279 "uuid": "a412aa4e-2e1d-4058-8a3e-faa1497b60d7", 00:15:30.279 "strip_size_kb": 64, 
00:15:30.279 "state": "online", 00:15:30.279 "raid_level": "raid5f", 00:15:30.279 "superblock": false, 00:15:30.279 "num_base_bdevs": 4, 00:15:30.279 "num_base_bdevs_discovered": 4, 00:15:30.279 "num_base_bdevs_operational": 4, 00:15:30.279 "base_bdevs_list": [ 00:15:30.279 { 00:15:30.279 "name": "BaseBdev1", 00:15:30.279 "uuid": "3370fc04-9c15-4eac-97dc-84d4165029e1", 00:15:30.279 "is_configured": true, 00:15:30.279 "data_offset": 0, 00:15:30.279 "data_size": 65536 00:15:30.279 }, 00:15:30.279 { 00:15:30.279 "name": "BaseBdev2", 00:15:30.279 "uuid": "24bb1e06-efa9-482c-971b-16171e47c284", 00:15:30.279 "is_configured": true, 00:15:30.279 "data_offset": 0, 00:15:30.279 "data_size": 65536 00:15:30.279 }, 00:15:30.279 { 00:15:30.279 "name": "BaseBdev3", 00:15:30.279 "uuid": "21e3d29a-fe91-408e-8415-c84d9ff22cde", 00:15:30.279 "is_configured": true, 00:15:30.279 "data_offset": 0, 00:15:30.279 "data_size": 65536 00:15:30.279 }, 00:15:30.279 { 00:15:30.279 "name": "BaseBdev4", 00:15:30.279 "uuid": "449967ca-59c3-4301-a65f-8c1dc48ab1e2", 00:15:30.279 "is_configured": true, 00:15:30.280 "data_offset": 0, 00:15:30.280 "data_size": 65536 00:15:30.280 } 00:15:30.280 ] 00:15:30.280 } 00:15:30.280 } 00:15:30.280 }' 00:15:30.280 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:30.280 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:30.280 BaseBdev2 00:15:30.280 BaseBdev3 00:15:30.280 BaseBdev4' 00:15:30.280 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.280 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:30.280 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.280 10:43:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:30.280 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.280 10:43:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.280 10:43:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.280 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:30.540 [2024-11-18 10:43:56.180974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.540 10:43:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.540 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.540 "name": "Existed_Raid", 00:15:30.540 "uuid": "a412aa4e-2e1d-4058-8a3e-faa1497b60d7", 00:15:30.540 "strip_size_kb": 64, 00:15:30.540 "state": "online", 00:15:30.540 "raid_level": "raid5f", 00:15:30.540 "superblock": false, 00:15:30.540 "num_base_bdevs": 4, 00:15:30.540 "num_base_bdevs_discovered": 3, 00:15:30.540 "num_base_bdevs_operational": 3, 00:15:30.540 "base_bdevs_list": [ 00:15:30.540 { 00:15:30.540 "name": null, 00:15:30.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.540 "is_configured": false, 00:15:30.540 "data_offset": 0, 00:15:30.540 "data_size": 65536 00:15:30.540 }, 00:15:30.540 { 00:15:30.540 "name": "BaseBdev2", 00:15:30.540 "uuid": "24bb1e06-efa9-482c-971b-16171e47c284", 00:15:30.540 "is_configured": true, 00:15:30.540 "data_offset": 0, 00:15:30.540 "data_size": 65536 00:15:30.540 }, 00:15:30.540 { 00:15:30.540 "name": "BaseBdev3", 00:15:30.540 "uuid": "21e3d29a-fe91-408e-8415-c84d9ff22cde", 00:15:30.540 "is_configured": true, 00:15:30.540 "data_offset": 0, 00:15:30.540 "data_size": 65536 00:15:30.540 }, 00:15:30.540 { 00:15:30.540 "name": "BaseBdev4", 00:15:30.540 "uuid": "449967ca-59c3-4301-a65f-8c1dc48ab1e2", 00:15:30.540 "is_configured": true, 00:15:30.540 "data_offset": 0, 00:15:30.540 "data_size": 65536 00:15:30.540 } 00:15:30.540 ] 00:15:30.540 }' 00:15:30.541 
10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.541 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.111 [2024-11-18 10:43:56.748297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:31.111 [2024-11-18 10:43:56.748391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.111 [2024-11-18 10:43:56.837623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.111 [2024-11-18 10:43:56.897545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.111 10:43:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.372 [2024-11-18 10:43:57.040743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:31.372 [2024-11-18 10:43:57.040841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.372 10:43:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.372 BaseBdev2 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.372 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.372 [ 00:15:31.372 { 00:15:31.372 "name": "BaseBdev2", 00:15:31.372 "aliases": [ 00:15:31.372 "d9d7583e-3e0a-438a-8228-f7fdad5e782c" 00:15:31.372 ], 00:15:31.372 "product_name": "Malloc disk", 00:15:31.372 "block_size": 512, 00:15:31.372 "num_blocks": 65536, 00:15:31.372 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:31.372 "assigned_rate_limits": { 00:15:31.372 "rw_ios_per_sec": 0, 00:15:31.372 "rw_mbytes_per_sec": 0, 00:15:31.372 "r_mbytes_per_sec": 0, 00:15:31.372 "w_mbytes_per_sec": 0 00:15:31.372 }, 00:15:31.372 "claimed": false, 00:15:31.372 "zoned": false, 00:15:31.372 "supported_io_types": { 00:15:31.372 "read": true, 00:15:31.372 "write": true, 00:15:31.372 "unmap": true, 00:15:31.372 "flush": true, 00:15:31.372 "reset": true, 00:15:31.372 "nvme_admin": false, 00:15:31.372 "nvme_io": false, 00:15:31.372 "nvme_io_md": false, 00:15:31.372 "write_zeroes": true, 00:15:31.372 "zcopy": true, 00:15:31.372 "get_zone_info": false, 00:15:31.372 "zone_management": false, 00:15:31.372 "zone_append": false, 00:15:31.372 "compare": false, 00:15:31.633 "compare_and_write": false, 00:15:31.633 "abort": true, 00:15:31.633 "seek_hole": false, 00:15:31.633 "seek_data": false, 00:15:31.633 "copy": true, 00:15:31.633 "nvme_iov_md": false 00:15:31.633 }, 00:15:31.633 "memory_domains": [ 00:15:31.633 { 00:15:31.633 "dma_device_id": "system", 00:15:31.633 "dma_device_type": 1 00:15:31.633 }, 
00:15:31.633 { 00:15:31.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.633 "dma_device_type": 2 00:15:31.633 } 00:15:31.633 ], 00:15:31.633 "driver_specific": {} 00:15:31.633 } 00:15:31.633 ] 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.633 BaseBdev3 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.633 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.634 [ 00:15:31.634 { 00:15:31.634 "name": "BaseBdev3", 00:15:31.634 "aliases": [ 00:15:31.634 "20382660-3fd1-4a1b-82af-c8e0019ccee9" 00:15:31.634 ], 00:15:31.634 "product_name": "Malloc disk", 00:15:31.634 "block_size": 512, 00:15:31.634 "num_blocks": 65536, 00:15:31.634 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:31.634 "assigned_rate_limits": { 00:15:31.634 "rw_ios_per_sec": 0, 00:15:31.634 "rw_mbytes_per_sec": 0, 00:15:31.634 "r_mbytes_per_sec": 0, 00:15:31.634 "w_mbytes_per_sec": 0 00:15:31.634 }, 00:15:31.634 "claimed": false, 00:15:31.634 "zoned": false, 00:15:31.634 "supported_io_types": { 00:15:31.634 "read": true, 00:15:31.634 "write": true, 00:15:31.634 "unmap": true, 00:15:31.634 "flush": true, 00:15:31.634 "reset": true, 00:15:31.634 "nvme_admin": false, 00:15:31.634 "nvme_io": false, 00:15:31.634 "nvme_io_md": false, 00:15:31.634 "write_zeroes": true, 00:15:31.634 "zcopy": true, 00:15:31.634 "get_zone_info": false, 00:15:31.634 "zone_management": false, 00:15:31.634 "zone_append": false, 00:15:31.634 "compare": false, 00:15:31.634 "compare_and_write": false, 00:15:31.634 "abort": true, 00:15:31.634 "seek_hole": false, 00:15:31.634 "seek_data": false, 00:15:31.634 "copy": true, 00:15:31.634 "nvme_iov_md": false 00:15:31.634 }, 00:15:31.634 "memory_domains": [ 00:15:31.634 { 00:15:31.634 "dma_device_id": "system", 00:15:31.634 
"dma_device_type": 1 00:15:31.634 }, 00:15:31.634 { 00:15:31.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.634 "dma_device_type": 2 00:15:31.634 } 00:15:31.634 ], 00:15:31.634 "driver_specific": {} 00:15:31.634 } 00:15:31.634 ] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.634 BaseBdev4 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.634 10:43:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.634 [ 00:15:31.634 { 00:15:31.634 "name": "BaseBdev4", 00:15:31.634 "aliases": [ 00:15:31.634 "55f41d40-bed8-496f-9f12-f9dce35ff052" 00:15:31.634 ], 00:15:31.634 "product_name": "Malloc disk", 00:15:31.634 "block_size": 512, 00:15:31.634 "num_blocks": 65536, 00:15:31.634 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:31.634 "assigned_rate_limits": { 00:15:31.634 "rw_ios_per_sec": 0, 00:15:31.634 "rw_mbytes_per_sec": 0, 00:15:31.634 "r_mbytes_per_sec": 0, 00:15:31.634 "w_mbytes_per_sec": 0 00:15:31.634 }, 00:15:31.634 "claimed": false, 00:15:31.634 "zoned": false, 00:15:31.634 "supported_io_types": { 00:15:31.634 "read": true, 00:15:31.634 "write": true, 00:15:31.634 "unmap": true, 00:15:31.634 "flush": true, 00:15:31.634 "reset": true, 00:15:31.634 "nvme_admin": false, 00:15:31.634 "nvme_io": false, 00:15:31.634 "nvme_io_md": false, 00:15:31.634 "write_zeroes": true, 00:15:31.634 "zcopy": true, 00:15:31.634 "get_zone_info": false, 00:15:31.634 "zone_management": false, 00:15:31.634 "zone_append": false, 00:15:31.634 "compare": false, 00:15:31.634 "compare_and_write": false, 00:15:31.634 "abort": true, 00:15:31.634 "seek_hole": false, 00:15:31.634 "seek_data": false, 00:15:31.634 "copy": true, 00:15:31.634 "nvme_iov_md": false 00:15:31.634 }, 00:15:31.634 "memory_domains": [ 00:15:31.634 { 00:15:31.634 
"dma_device_id": "system", 00:15:31.634 "dma_device_type": 1 00:15:31.634 }, 00:15:31.634 { 00:15:31.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.634 "dma_device_type": 2 00:15:31.634 } 00:15:31.634 ], 00:15:31.634 "driver_specific": {} 00:15:31.634 } 00:15:31.634 ] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.634 [2024-11-18 10:43:57.422505] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.634 [2024-11-18 10:43:57.422633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.634 [2024-11-18 10:43:57.422671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.634 [2024-11-18 10:43:57.424323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.634 [2024-11-18 10:43:57.424412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.634 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.635 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.635 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.635 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.635 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.635 "name": "Existed_Raid", 00:15:31.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.635 "strip_size_kb": 64, 00:15:31.635 "state": "configuring", 00:15:31.635 "raid_level": "raid5f", 00:15:31.635 "superblock": false, 00:15:31.635 
"num_base_bdevs": 4, 00:15:31.635 "num_base_bdevs_discovered": 3, 00:15:31.635 "num_base_bdevs_operational": 4, 00:15:31.635 "base_bdevs_list": [ 00:15:31.635 { 00:15:31.635 "name": "BaseBdev1", 00:15:31.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.635 "is_configured": false, 00:15:31.635 "data_offset": 0, 00:15:31.635 "data_size": 0 00:15:31.635 }, 00:15:31.635 { 00:15:31.635 "name": "BaseBdev2", 00:15:31.635 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:31.635 "is_configured": true, 00:15:31.635 "data_offset": 0, 00:15:31.635 "data_size": 65536 00:15:31.635 }, 00:15:31.635 { 00:15:31.635 "name": "BaseBdev3", 00:15:31.635 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:31.635 "is_configured": true, 00:15:31.635 "data_offset": 0, 00:15:31.635 "data_size": 65536 00:15:31.635 }, 00:15:31.635 { 00:15:31.635 "name": "BaseBdev4", 00:15:31.635 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:31.635 "is_configured": true, 00:15:31.635 "data_offset": 0, 00:15:31.635 "data_size": 65536 00:15:31.635 } 00:15:31.635 ] 00:15:31.635 }' 00:15:31.635 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.635 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.206 [2024-11-18 10:43:57.893719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.206 "name": "Existed_Raid", 00:15:32.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.206 "strip_size_kb": 64, 00:15:32.206 "state": "configuring", 00:15:32.206 "raid_level": "raid5f", 00:15:32.206 "superblock": false, 00:15:32.206 "num_base_bdevs": 4, 
00:15:32.206 "num_base_bdevs_discovered": 2, 00:15:32.206 "num_base_bdevs_operational": 4, 00:15:32.206 "base_bdevs_list": [ 00:15:32.206 { 00:15:32.206 "name": "BaseBdev1", 00:15:32.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.206 "is_configured": false, 00:15:32.206 "data_offset": 0, 00:15:32.206 "data_size": 0 00:15:32.206 }, 00:15:32.206 { 00:15:32.206 "name": null, 00:15:32.206 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:32.206 "is_configured": false, 00:15:32.206 "data_offset": 0, 00:15:32.206 "data_size": 65536 00:15:32.206 }, 00:15:32.206 { 00:15:32.206 "name": "BaseBdev3", 00:15:32.206 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:32.206 "is_configured": true, 00:15:32.206 "data_offset": 0, 00:15:32.206 "data_size": 65536 00:15:32.206 }, 00:15:32.206 { 00:15:32.206 "name": "BaseBdev4", 00:15:32.206 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:32.206 "is_configured": true, 00:15:32.206 "data_offset": 0, 00:15:32.206 "data_size": 65536 00:15:32.206 } 00:15:32.206 ] 00:15:32.206 }' 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.206 10:43:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:32.776 10:43:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.776 [2024-11-18 10:43:58.444248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.776 BaseBdev1 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.776 10:43:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.776 [ 00:15:32.776 { 00:15:32.776 "name": "BaseBdev1", 00:15:32.776 "aliases": [ 00:15:32.776 "887f15ce-8234-4239-a7c1-e31c58099991" 00:15:32.776 ], 00:15:32.776 "product_name": "Malloc disk", 00:15:32.776 "block_size": 512, 00:15:32.776 "num_blocks": 65536, 00:15:32.776 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:32.776 "assigned_rate_limits": { 00:15:32.776 "rw_ios_per_sec": 0, 00:15:32.776 "rw_mbytes_per_sec": 0, 00:15:32.776 "r_mbytes_per_sec": 0, 00:15:32.776 "w_mbytes_per_sec": 0 00:15:32.776 }, 00:15:32.776 "claimed": true, 00:15:32.776 "claim_type": "exclusive_write", 00:15:32.776 "zoned": false, 00:15:32.776 "supported_io_types": { 00:15:32.776 "read": true, 00:15:32.776 "write": true, 00:15:32.776 "unmap": true, 00:15:32.776 "flush": true, 00:15:32.776 "reset": true, 00:15:32.776 "nvme_admin": false, 00:15:32.776 "nvme_io": false, 00:15:32.776 "nvme_io_md": false, 00:15:32.776 "write_zeroes": true, 00:15:32.776 "zcopy": true, 00:15:32.776 "get_zone_info": false, 00:15:32.776 "zone_management": false, 00:15:32.776 "zone_append": false, 00:15:32.776 "compare": false, 00:15:32.776 "compare_and_write": false, 00:15:32.776 "abort": true, 00:15:32.776 "seek_hole": false, 00:15:32.776 "seek_data": false, 00:15:32.776 "copy": true, 00:15:32.776 "nvme_iov_md": false 00:15:32.776 }, 00:15:32.776 "memory_domains": [ 00:15:32.776 { 00:15:32.776 "dma_device_id": "system", 00:15:32.776 "dma_device_type": 1 00:15:32.776 }, 00:15:32.776 { 00:15:32.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.776 "dma_device_type": 2 00:15:32.776 } 00:15:32.776 ], 00:15:32.776 "driver_specific": {} 00:15:32.776 } 00:15:32.776 ] 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:32.776 10:43:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.776 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.777 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.777 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.777 "name": "Existed_Raid", 00:15:32.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.777 "strip_size_kb": 64, 00:15:32.777 "state": 
"configuring", 00:15:32.777 "raid_level": "raid5f", 00:15:32.777 "superblock": false, 00:15:32.777 "num_base_bdevs": 4, 00:15:32.777 "num_base_bdevs_discovered": 3, 00:15:32.777 "num_base_bdevs_operational": 4, 00:15:32.777 "base_bdevs_list": [ 00:15:32.777 { 00:15:32.777 "name": "BaseBdev1", 00:15:32.777 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:32.777 "is_configured": true, 00:15:32.777 "data_offset": 0, 00:15:32.777 "data_size": 65536 00:15:32.777 }, 00:15:32.777 { 00:15:32.777 "name": null, 00:15:32.777 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:32.777 "is_configured": false, 00:15:32.777 "data_offset": 0, 00:15:32.777 "data_size": 65536 00:15:32.777 }, 00:15:32.777 { 00:15:32.777 "name": "BaseBdev3", 00:15:32.777 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:32.777 "is_configured": true, 00:15:32.777 "data_offset": 0, 00:15:32.777 "data_size": 65536 00:15:32.777 }, 00:15:32.777 { 00:15:32.777 "name": "BaseBdev4", 00:15:32.777 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:32.777 "is_configured": true, 00:15:32.777 "data_offset": 0, 00:15:32.777 "data_size": 65536 00:15:32.777 } 00:15:32.777 ] 00:15:32.777 }' 00:15:32.777 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.777 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.347 10:43:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.347 [2024-11-18 10:43:58.975345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.347 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.348 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.348 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.348 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.348 10:43:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.348 10:43:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.348 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.348 10:43:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.348 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.348 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.348 "name": "Existed_Raid", 00:15:33.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.348 "strip_size_kb": 64, 00:15:33.348 "state": "configuring", 00:15:33.348 "raid_level": "raid5f", 00:15:33.348 "superblock": false, 00:15:33.348 "num_base_bdevs": 4, 00:15:33.348 "num_base_bdevs_discovered": 2, 00:15:33.348 "num_base_bdevs_operational": 4, 00:15:33.348 "base_bdevs_list": [ 00:15:33.348 { 00:15:33.348 "name": "BaseBdev1", 00:15:33.348 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:33.348 "is_configured": true, 00:15:33.348 "data_offset": 0, 00:15:33.348 "data_size": 65536 00:15:33.348 }, 00:15:33.348 { 00:15:33.348 "name": null, 00:15:33.348 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:33.348 "is_configured": false, 00:15:33.348 "data_offset": 0, 00:15:33.348 "data_size": 65536 00:15:33.348 }, 00:15:33.348 { 00:15:33.348 "name": null, 00:15:33.348 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:33.348 "is_configured": false, 00:15:33.348 "data_offset": 0, 00:15:33.348 "data_size": 65536 00:15:33.348 }, 00:15:33.348 { 00:15:33.348 "name": "BaseBdev4", 00:15:33.348 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:33.348 "is_configured": true, 00:15:33.348 "data_offset": 0, 00:15:33.348 "data_size": 65536 00:15:33.348 } 00:15:33.348 ] 00:15:33.348 }' 00:15:33.348 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.348 10:43:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.608 [2024-11-18 10:43:59.474661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.608 
10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.608 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.867 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.867 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.867 "name": "Existed_Raid", 00:15:33.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.867 "strip_size_kb": 64, 00:15:33.867 "state": "configuring", 00:15:33.867 "raid_level": "raid5f", 00:15:33.867 "superblock": false, 00:15:33.867 "num_base_bdevs": 4, 00:15:33.867 "num_base_bdevs_discovered": 3, 00:15:33.867 "num_base_bdevs_operational": 4, 00:15:33.867 "base_bdevs_list": [ 00:15:33.867 { 00:15:33.867 "name": "BaseBdev1", 00:15:33.867 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:33.867 "is_configured": true, 00:15:33.867 "data_offset": 0, 00:15:33.867 "data_size": 65536 00:15:33.867 }, 00:15:33.867 { 00:15:33.867 "name": null, 00:15:33.867 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:33.867 "is_configured": 
false, 00:15:33.867 "data_offset": 0, 00:15:33.867 "data_size": 65536 00:15:33.867 }, 00:15:33.867 { 00:15:33.867 "name": "BaseBdev3", 00:15:33.867 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:33.867 "is_configured": true, 00:15:33.867 "data_offset": 0, 00:15:33.867 "data_size": 65536 00:15:33.867 }, 00:15:33.867 { 00:15:33.867 "name": "BaseBdev4", 00:15:33.867 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:33.867 "is_configured": true, 00:15:33.867 "data_offset": 0, 00:15:33.867 "data_size": 65536 00:15:33.867 } 00:15:33.867 ] 00:15:33.867 }' 00:15:33.867 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.867 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.127 10:43:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.127 [2024-11-18 10:43:59.961893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.387 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.388 "name": "Existed_Raid", 00:15:34.388 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:34.388 "strip_size_kb": 64, 00:15:34.388 "state": "configuring", 00:15:34.388 "raid_level": "raid5f", 00:15:34.388 "superblock": false, 00:15:34.388 "num_base_bdevs": 4, 00:15:34.388 "num_base_bdevs_discovered": 2, 00:15:34.388 "num_base_bdevs_operational": 4, 00:15:34.388 "base_bdevs_list": [ 00:15:34.388 { 00:15:34.388 "name": null, 00:15:34.388 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:34.388 "is_configured": false, 00:15:34.388 "data_offset": 0, 00:15:34.388 "data_size": 65536 00:15:34.388 }, 00:15:34.388 { 00:15:34.388 "name": null, 00:15:34.388 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:34.388 "is_configured": false, 00:15:34.388 "data_offset": 0, 00:15:34.388 "data_size": 65536 00:15:34.388 }, 00:15:34.388 { 00:15:34.388 "name": "BaseBdev3", 00:15:34.388 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:34.388 "is_configured": true, 00:15:34.388 "data_offset": 0, 00:15:34.388 "data_size": 65536 00:15:34.388 }, 00:15:34.388 { 00:15:34.388 "name": "BaseBdev4", 00:15:34.388 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:34.388 "is_configured": true, 00:15:34.388 "data_offset": 0, 00:15:34.388 "data_size": 65536 00:15:34.388 } 00:15:34.388 ] 00:15:34.388 }' 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.388 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.647 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 [2024-11-18 10:44:00.526330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.907 "name": "Existed_Raid", 00:15:34.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.907 "strip_size_kb": 64, 00:15:34.907 "state": "configuring", 00:15:34.907 "raid_level": "raid5f", 00:15:34.907 "superblock": false, 00:15:34.907 "num_base_bdevs": 4, 00:15:34.907 "num_base_bdevs_discovered": 3, 00:15:34.907 "num_base_bdevs_operational": 4, 00:15:34.907 "base_bdevs_list": [ 00:15:34.907 { 00:15:34.907 "name": null, 00:15:34.907 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:34.907 "is_configured": false, 00:15:34.907 "data_offset": 0, 00:15:34.907 "data_size": 65536 00:15:34.907 }, 00:15:34.907 { 00:15:34.907 "name": "BaseBdev2", 00:15:34.907 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:34.907 "is_configured": true, 00:15:34.907 "data_offset": 0, 00:15:34.907 "data_size": 65536 00:15:34.907 }, 00:15:34.907 { 00:15:34.907 "name": "BaseBdev3", 00:15:34.907 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:34.907 "is_configured": true, 00:15:34.907 "data_offset": 0, 00:15:34.907 "data_size": 65536 00:15:34.907 }, 00:15:34.907 { 00:15:34.907 "name": "BaseBdev4", 00:15:34.907 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:34.907 "is_configured": true, 00:15:34.907 "data_offset": 0, 00:15:34.907 "data_size": 65536 00:15:34.907 } 00:15:34.907 ] 00:15:34.907 }' 00:15:34.907 10:44:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.907 10:44:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.167 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.167 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.167 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.167 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:35.167 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 887f15ce-8234-4239-a7c1-e31c58099991 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.428 [2024-11-18 10:44:01.144093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:35.428 [2024-11-18 
10:44:01.144225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:35.428 [2024-11-18 10:44:01.144252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:35.428 [2024-11-18 10:44:01.144530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:35.428 [2024-11-18 10:44:01.151383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:35.428 [2024-11-18 10:44:01.151440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:35.428 [2024-11-18 10:44:01.151703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.428 NewBaseBdev 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.428 [ 00:15:35.428 { 00:15:35.428 "name": "NewBaseBdev", 00:15:35.428 "aliases": [ 00:15:35.428 "887f15ce-8234-4239-a7c1-e31c58099991" 00:15:35.428 ], 00:15:35.428 "product_name": "Malloc disk", 00:15:35.428 "block_size": 512, 00:15:35.428 "num_blocks": 65536, 00:15:35.428 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:35.428 "assigned_rate_limits": { 00:15:35.428 "rw_ios_per_sec": 0, 00:15:35.428 "rw_mbytes_per_sec": 0, 00:15:35.428 "r_mbytes_per_sec": 0, 00:15:35.428 "w_mbytes_per_sec": 0 00:15:35.428 }, 00:15:35.428 "claimed": true, 00:15:35.428 "claim_type": "exclusive_write", 00:15:35.428 "zoned": false, 00:15:35.428 "supported_io_types": { 00:15:35.428 "read": true, 00:15:35.428 "write": true, 00:15:35.428 "unmap": true, 00:15:35.428 "flush": true, 00:15:35.428 "reset": true, 00:15:35.428 "nvme_admin": false, 00:15:35.428 "nvme_io": false, 00:15:35.428 "nvme_io_md": false, 00:15:35.428 "write_zeroes": true, 00:15:35.428 "zcopy": true, 00:15:35.428 "get_zone_info": false, 00:15:35.428 "zone_management": false, 00:15:35.428 "zone_append": false, 00:15:35.428 "compare": false, 00:15:35.428 "compare_and_write": false, 00:15:35.428 "abort": true, 00:15:35.428 "seek_hole": false, 00:15:35.428 "seek_data": false, 00:15:35.428 "copy": true, 00:15:35.428 "nvme_iov_md": false 00:15:35.428 }, 00:15:35.428 "memory_domains": [ 00:15:35.428 { 00:15:35.428 "dma_device_id": "system", 00:15:35.428 "dma_device_type": 1 00:15:35.428 }, 00:15:35.428 { 00:15:35.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.428 "dma_device_type": 2 00:15:35.428 } 
00:15:35.428 ], 00:15:35.428 "driver_specific": {} 00:15:35.428 } 00:15:35.428 ] 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.428 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.429 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.429 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.429 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.429 10:44:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.429 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.429 "name": "Existed_Raid", 00:15:35.429 "uuid": "beefca36-e2ae-4958-9e79-c60de04c3673", 00:15:35.429 "strip_size_kb": 64, 00:15:35.429 "state": "online", 00:15:35.429 "raid_level": "raid5f", 00:15:35.429 "superblock": false, 00:15:35.429 "num_base_bdevs": 4, 00:15:35.429 "num_base_bdevs_discovered": 4, 00:15:35.429 "num_base_bdevs_operational": 4, 00:15:35.429 "base_bdevs_list": [ 00:15:35.429 { 00:15:35.429 "name": "NewBaseBdev", 00:15:35.429 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:35.429 "is_configured": true, 00:15:35.429 "data_offset": 0, 00:15:35.429 "data_size": 65536 00:15:35.429 }, 00:15:35.429 { 00:15:35.429 "name": "BaseBdev2", 00:15:35.429 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:35.429 "is_configured": true, 00:15:35.429 "data_offset": 0, 00:15:35.429 "data_size": 65536 00:15:35.429 }, 00:15:35.429 { 00:15:35.429 "name": "BaseBdev3", 00:15:35.429 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:35.429 "is_configured": true, 00:15:35.429 "data_offset": 0, 00:15:35.429 "data_size": 65536 00:15:35.429 }, 00:15:35.429 { 00:15:35.429 "name": "BaseBdev4", 00:15:35.429 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:35.429 "is_configured": true, 00:15:35.429 "data_offset": 0, 00:15:35.429 "data_size": 65536 00:15:35.429 } 00:15:35.429 ] 00:15:35.429 }' 00:15:35.429 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.429 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.015 [2024-11-18 10:44:01.643467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.015 "name": "Existed_Raid", 00:15:36.015 "aliases": [ 00:15:36.015 "beefca36-e2ae-4958-9e79-c60de04c3673" 00:15:36.015 ], 00:15:36.015 "product_name": "Raid Volume", 00:15:36.015 "block_size": 512, 00:15:36.015 "num_blocks": 196608, 00:15:36.015 "uuid": "beefca36-e2ae-4958-9e79-c60de04c3673", 00:15:36.015 "assigned_rate_limits": { 00:15:36.015 "rw_ios_per_sec": 0, 00:15:36.015 "rw_mbytes_per_sec": 0, 00:15:36.015 "r_mbytes_per_sec": 0, 00:15:36.015 "w_mbytes_per_sec": 0 00:15:36.015 }, 00:15:36.015 "claimed": false, 00:15:36.015 "zoned": false, 00:15:36.015 "supported_io_types": { 00:15:36.015 "read": true, 00:15:36.015 "write": true, 00:15:36.015 "unmap": false, 00:15:36.015 "flush": false, 00:15:36.015 "reset": true, 00:15:36.015 "nvme_admin": false, 00:15:36.015 "nvme_io": false, 00:15:36.015 "nvme_io_md": 
false, 00:15:36.015 "write_zeroes": true, 00:15:36.015 "zcopy": false, 00:15:36.015 "get_zone_info": false, 00:15:36.015 "zone_management": false, 00:15:36.015 "zone_append": false, 00:15:36.015 "compare": false, 00:15:36.015 "compare_and_write": false, 00:15:36.015 "abort": false, 00:15:36.015 "seek_hole": false, 00:15:36.015 "seek_data": false, 00:15:36.015 "copy": false, 00:15:36.015 "nvme_iov_md": false 00:15:36.015 }, 00:15:36.015 "driver_specific": { 00:15:36.015 "raid": { 00:15:36.015 "uuid": "beefca36-e2ae-4958-9e79-c60de04c3673", 00:15:36.015 "strip_size_kb": 64, 00:15:36.015 "state": "online", 00:15:36.015 "raid_level": "raid5f", 00:15:36.015 "superblock": false, 00:15:36.015 "num_base_bdevs": 4, 00:15:36.015 "num_base_bdevs_discovered": 4, 00:15:36.015 "num_base_bdevs_operational": 4, 00:15:36.015 "base_bdevs_list": [ 00:15:36.015 { 00:15:36.015 "name": "NewBaseBdev", 00:15:36.015 "uuid": "887f15ce-8234-4239-a7c1-e31c58099991", 00:15:36.015 "is_configured": true, 00:15:36.015 "data_offset": 0, 00:15:36.015 "data_size": 65536 00:15:36.015 }, 00:15:36.015 { 00:15:36.015 "name": "BaseBdev2", 00:15:36.015 "uuid": "d9d7583e-3e0a-438a-8228-f7fdad5e782c", 00:15:36.015 "is_configured": true, 00:15:36.015 "data_offset": 0, 00:15:36.015 "data_size": 65536 00:15:36.015 }, 00:15:36.015 { 00:15:36.015 "name": "BaseBdev3", 00:15:36.015 "uuid": "20382660-3fd1-4a1b-82af-c8e0019ccee9", 00:15:36.015 "is_configured": true, 00:15:36.015 "data_offset": 0, 00:15:36.015 "data_size": 65536 00:15:36.015 }, 00:15:36.015 { 00:15:36.015 "name": "BaseBdev4", 00:15:36.015 "uuid": "55f41d40-bed8-496f-9f12-f9dce35ff052", 00:15:36.015 "is_configured": true, 00:15:36.015 "data_offset": 0, 00:15:36.015 "data_size": 65536 00:15:36.015 } 00:15:36.015 ] 00:15:36.015 } 00:15:36.015 } 00:15:36.015 }' 00:15:36.015 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.015 10:44:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:36.015 BaseBdev2 00:15:36.015 BaseBdev3 00:15:36.015 BaseBdev4' 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.016 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.287 10:44:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.287 [2024-11-18 10:44:01.991278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:36.287 [2024-11-18 10:44:01.991303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.287 [2024-11-18 10:44:01.991360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.287 [2024-11-18 10:44:01.991628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.287 [2024-11-18 10:44:01.991638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82557 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82557 ']' 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82557 00:15:36.287 10:44:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:36.287 10:44:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:36.287 10:44:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82557 00:15:36.287 10:44:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.287 10:44:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.287 10:44:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82557' 00:15:36.287 killing process with pid 82557 00:15:36.287 10:44:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82557 00:15:36.287 [2024-11-18 10:44:02.030904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.287 10:44:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82557 00:15:36.547 [2024-11-18 10:44:02.398761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:37.930 00:15:37.930 real 0m11.567s 00:15:37.930 user 0m18.446s 00:15:37.930 sys 0m2.218s 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.930 ************************************ 00:15:37.930 END TEST raid5f_state_function_test 00:15:37.930 ************************************ 00:15:37.930 10:44:03 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:37.930 10:44:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:37.930 10:44:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.930 10:44:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.930 ************************************ 00:15:37.930 START TEST 
raid5f_state_function_test_sb 00:15:37.930 ************************************ 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:37.930 
10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83228 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83228' 00:15:37.930 Process raid pid: 83228 00:15:37.930 10:44:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83228 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83228 ']' 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.930 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.930 [2024-11-18 10:44:03.615431] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:37.930 [2024-11-18 10:44:03.615621] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.930 [2024-11-18 10:44:03.796320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.190 [2024-11-18 10:44:03.904142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.450 [2024-11-18 10:44:04.111384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.450 [2024-11-18 10:44:04.111417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.710 [2024-11-18 10:44:04.439861] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.710 [2024-11-18 10:44:04.439917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.710 [2024-11-18 10:44:04.439931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.710 [2024-11-18 10:44:04.439941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.710 [2024-11-18 10:44:04.439947] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:38.710 [2024-11-18 10:44:04.439955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.710 [2024-11-18 10:44:04.439961] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:38.710 [2024-11-18 10:44:04.439969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.710 "name": "Existed_Raid", 00:15:38.710 "uuid": "ecc2d1ef-3a9d-4a0e-90a4-85422fbbac0d", 00:15:38.710 "strip_size_kb": 64, 00:15:38.710 "state": "configuring", 00:15:38.710 "raid_level": "raid5f", 00:15:38.710 "superblock": true, 00:15:38.710 "num_base_bdevs": 4, 00:15:38.710 "num_base_bdevs_discovered": 0, 00:15:38.710 "num_base_bdevs_operational": 4, 00:15:38.710 "base_bdevs_list": [ 00:15:38.710 { 00:15:38.710 "name": "BaseBdev1", 00:15:38.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.710 "is_configured": false, 00:15:38.710 "data_offset": 0, 00:15:38.710 "data_size": 0 00:15:38.710 }, 00:15:38.710 { 00:15:38.710 "name": "BaseBdev2", 00:15:38.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.710 "is_configured": false, 00:15:38.710 "data_offset": 0, 00:15:38.710 "data_size": 0 00:15:38.710 }, 00:15:38.710 { 00:15:38.710 "name": "BaseBdev3", 00:15:38.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.710 "is_configured": false, 00:15:38.710 "data_offset": 0, 00:15:38.710 "data_size": 0 00:15:38.710 }, 00:15:38.710 { 00:15:38.710 "name": "BaseBdev4", 00:15:38.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.710 "is_configured": false, 00:15:38.710 "data_offset": 0, 00:15:38.710 "data_size": 0 00:15:38.710 } 00:15:38.710 ] 00:15:38.710 }' 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.710 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:38.970 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:38.970 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.970 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.970 [2024-11-18 10:44:04.851294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.970 [2024-11-18 10:44:04.851381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.230 [2024-11-18 10:44:04.863333] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.230 [2024-11-18 10:44:04.863414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.230 [2024-11-18 10:44:04.863440] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.230 [2024-11-18 10:44:04.863461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.230 [2024-11-18 10:44:04.863479] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.230 [2024-11-18 10:44:04.863499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.230 [2024-11-18 10:44:04.863516] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:39.230 [2024-11-18 10:44:04.863536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.230 [2024-11-18 10:44:04.910662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.230 BaseBdev1 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:39.230 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.231 [ 00:15:39.231 { 00:15:39.231 "name": "BaseBdev1", 00:15:39.231 "aliases": [ 00:15:39.231 "30a496b2-8eec-4e57-b9d2-a798b39d4c7e" 00:15:39.231 ], 00:15:39.231 "product_name": "Malloc disk", 00:15:39.231 "block_size": 512, 00:15:39.231 "num_blocks": 65536, 00:15:39.231 "uuid": "30a496b2-8eec-4e57-b9d2-a798b39d4c7e", 00:15:39.231 "assigned_rate_limits": { 00:15:39.231 "rw_ios_per_sec": 0, 00:15:39.231 "rw_mbytes_per_sec": 0, 00:15:39.231 "r_mbytes_per_sec": 0, 00:15:39.231 "w_mbytes_per_sec": 0 00:15:39.231 }, 00:15:39.231 "claimed": true, 00:15:39.231 "claim_type": "exclusive_write", 00:15:39.231 "zoned": false, 00:15:39.231 "supported_io_types": { 00:15:39.231 "read": true, 00:15:39.231 "write": true, 00:15:39.231 "unmap": true, 00:15:39.231 "flush": true, 00:15:39.231 "reset": true, 00:15:39.231 "nvme_admin": false, 00:15:39.231 "nvme_io": false, 00:15:39.231 "nvme_io_md": false, 00:15:39.231 "write_zeroes": true, 00:15:39.231 "zcopy": true, 00:15:39.231 "get_zone_info": false, 00:15:39.231 "zone_management": false, 00:15:39.231 "zone_append": false, 00:15:39.231 "compare": false, 00:15:39.231 "compare_and_write": false, 00:15:39.231 "abort": true, 00:15:39.231 "seek_hole": false, 00:15:39.231 "seek_data": false, 00:15:39.231 "copy": true, 00:15:39.231 "nvme_iov_md": false 00:15:39.231 }, 00:15:39.231 "memory_domains": [ 00:15:39.231 { 00:15:39.231 "dma_device_id": "system", 00:15:39.231 "dma_device_type": 1 00:15:39.231 }, 00:15:39.231 { 00:15:39.231 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:39.231 "dma_device_type": 2 00:15:39.231 } 00:15:39.231 ], 00:15:39.231 "driver_specific": {} 00:15:39.231 } 00:15:39.231 ] 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.231 10:44:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.231 "name": "Existed_Raid", 00:15:39.231 "uuid": "65ba5452-1fae-4ef3-8b22-72c5a102b7b1", 00:15:39.231 "strip_size_kb": 64, 00:15:39.231 "state": "configuring", 00:15:39.231 "raid_level": "raid5f", 00:15:39.231 "superblock": true, 00:15:39.231 "num_base_bdevs": 4, 00:15:39.231 "num_base_bdevs_discovered": 1, 00:15:39.231 "num_base_bdevs_operational": 4, 00:15:39.231 "base_bdevs_list": [ 00:15:39.231 { 00:15:39.231 "name": "BaseBdev1", 00:15:39.231 "uuid": "30a496b2-8eec-4e57-b9d2-a798b39d4c7e", 00:15:39.231 "is_configured": true, 00:15:39.231 "data_offset": 2048, 00:15:39.231 "data_size": 63488 00:15:39.231 }, 00:15:39.231 { 00:15:39.231 "name": "BaseBdev2", 00:15:39.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.231 "is_configured": false, 00:15:39.231 "data_offset": 0, 00:15:39.231 "data_size": 0 00:15:39.231 }, 00:15:39.231 { 00:15:39.231 "name": "BaseBdev3", 00:15:39.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.231 "is_configured": false, 00:15:39.231 "data_offset": 0, 00:15:39.231 "data_size": 0 00:15:39.231 }, 00:15:39.231 { 00:15:39.231 "name": "BaseBdev4", 00:15:39.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.231 "is_configured": false, 00:15:39.231 "data_offset": 0, 00:15:39.231 "data_size": 0 00:15:39.231 } 00:15:39.231 ] 00:15:39.231 }' 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.231 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.801 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:39.801 10:44:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.801 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.801 [2024-11-18 10:44:05.397868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.801 [2024-11-18 10:44:05.397904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:39.801 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.801 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:39.801 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.801 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.801 [2024-11-18 10:44:05.409916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.801 [2024-11-18 10:44:05.411600] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.801 [2024-11-18 10:44:05.411685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.802 [2024-11-18 10:44:05.411698] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.802 [2024-11-18 10:44:05.411709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.802 [2024-11-18 10:44:05.411716] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:39.802 [2024-11-18 10:44:05.411723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.802 10:44:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.802 "name": "Existed_Raid", 00:15:39.802 "uuid": "431a4715-0de6-4168-b036-7b89dd704145", 00:15:39.802 "strip_size_kb": 64, 00:15:39.802 "state": "configuring", 00:15:39.802 "raid_level": "raid5f", 00:15:39.802 "superblock": true, 00:15:39.802 "num_base_bdevs": 4, 00:15:39.802 "num_base_bdevs_discovered": 1, 00:15:39.802 "num_base_bdevs_operational": 4, 00:15:39.802 "base_bdevs_list": [ 00:15:39.802 { 00:15:39.802 "name": "BaseBdev1", 00:15:39.802 "uuid": "30a496b2-8eec-4e57-b9d2-a798b39d4c7e", 00:15:39.802 "is_configured": true, 00:15:39.802 "data_offset": 2048, 00:15:39.802 "data_size": 63488 00:15:39.802 }, 00:15:39.802 { 00:15:39.802 "name": "BaseBdev2", 00:15:39.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.802 "is_configured": false, 00:15:39.802 "data_offset": 0, 00:15:39.802 "data_size": 0 00:15:39.802 }, 00:15:39.802 { 00:15:39.802 "name": "BaseBdev3", 00:15:39.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.802 "is_configured": false, 00:15:39.802 "data_offset": 0, 00:15:39.802 "data_size": 0 00:15:39.802 }, 00:15:39.802 { 00:15:39.802 "name": "BaseBdev4", 00:15:39.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.802 "is_configured": false, 00:15:39.802 "data_offset": 0, 00:15:39.802 "data_size": 0 00:15:39.802 } 00:15:39.802 ] 00:15:39.802 }' 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.802 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.061 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:40.061 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:40.061 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.061 [2024-11-18 10:44:05.943718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.061 BaseBdev2 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.321 [ 00:15:40.321 { 00:15:40.321 "name": "BaseBdev2", 00:15:40.321 "aliases": [ 00:15:40.321 
"d33672fa-240d-4c23-af9d-4c24f715c175" 00:15:40.321 ], 00:15:40.321 "product_name": "Malloc disk", 00:15:40.321 "block_size": 512, 00:15:40.321 "num_blocks": 65536, 00:15:40.321 "uuid": "d33672fa-240d-4c23-af9d-4c24f715c175", 00:15:40.321 "assigned_rate_limits": { 00:15:40.321 "rw_ios_per_sec": 0, 00:15:40.321 "rw_mbytes_per_sec": 0, 00:15:40.321 "r_mbytes_per_sec": 0, 00:15:40.321 "w_mbytes_per_sec": 0 00:15:40.321 }, 00:15:40.321 "claimed": true, 00:15:40.321 "claim_type": "exclusive_write", 00:15:40.321 "zoned": false, 00:15:40.321 "supported_io_types": { 00:15:40.321 "read": true, 00:15:40.321 "write": true, 00:15:40.321 "unmap": true, 00:15:40.321 "flush": true, 00:15:40.321 "reset": true, 00:15:40.321 "nvme_admin": false, 00:15:40.321 "nvme_io": false, 00:15:40.321 "nvme_io_md": false, 00:15:40.321 "write_zeroes": true, 00:15:40.321 "zcopy": true, 00:15:40.321 "get_zone_info": false, 00:15:40.321 "zone_management": false, 00:15:40.321 "zone_append": false, 00:15:40.321 "compare": false, 00:15:40.321 "compare_and_write": false, 00:15:40.321 "abort": true, 00:15:40.321 "seek_hole": false, 00:15:40.321 "seek_data": false, 00:15:40.321 "copy": true, 00:15:40.321 "nvme_iov_md": false 00:15:40.321 }, 00:15:40.321 "memory_domains": [ 00:15:40.321 { 00:15:40.321 "dma_device_id": "system", 00:15:40.321 "dma_device_type": 1 00:15:40.321 }, 00:15:40.321 { 00:15:40.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.321 "dma_device_type": 2 00:15:40.321 } 00:15:40.321 ], 00:15:40.321 "driver_specific": {} 00:15:40.321 } 00:15:40.321 ] 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:40.321 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.322 10:44:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.322 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.322 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.322 "name": "Existed_Raid", 00:15:40.322 "uuid": 
"431a4715-0de6-4168-b036-7b89dd704145", 00:15:40.322 "strip_size_kb": 64, 00:15:40.322 "state": "configuring", 00:15:40.322 "raid_level": "raid5f", 00:15:40.322 "superblock": true, 00:15:40.322 "num_base_bdevs": 4, 00:15:40.322 "num_base_bdevs_discovered": 2, 00:15:40.322 "num_base_bdevs_operational": 4, 00:15:40.322 "base_bdevs_list": [ 00:15:40.322 { 00:15:40.322 "name": "BaseBdev1", 00:15:40.322 "uuid": "30a496b2-8eec-4e57-b9d2-a798b39d4c7e", 00:15:40.322 "is_configured": true, 00:15:40.322 "data_offset": 2048, 00:15:40.322 "data_size": 63488 00:15:40.322 }, 00:15:40.322 { 00:15:40.322 "name": "BaseBdev2", 00:15:40.322 "uuid": "d33672fa-240d-4c23-af9d-4c24f715c175", 00:15:40.322 "is_configured": true, 00:15:40.322 "data_offset": 2048, 00:15:40.322 "data_size": 63488 00:15:40.322 }, 00:15:40.322 { 00:15:40.322 "name": "BaseBdev3", 00:15:40.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.322 "is_configured": false, 00:15:40.322 "data_offset": 0, 00:15:40.322 "data_size": 0 00:15:40.322 }, 00:15:40.322 { 00:15:40.322 "name": "BaseBdev4", 00:15:40.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.322 "is_configured": false, 00:15:40.322 "data_offset": 0, 00:15:40.322 "data_size": 0 00:15:40.322 } 00:15:40.322 ] 00:15:40.322 }' 00:15:40.322 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.322 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.582 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:40.582 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.582 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.841 [2024-11-18 10:44:06.502276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.841 BaseBdev3 
00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.841 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.842 [ 00:15:40.842 { 00:15:40.842 "name": "BaseBdev3", 00:15:40.842 "aliases": [ 00:15:40.842 "5e34d33a-f1cc-4730-8eb7-801d22dc19e0" 00:15:40.842 ], 00:15:40.842 "product_name": "Malloc disk", 00:15:40.842 "block_size": 512, 00:15:40.842 "num_blocks": 65536, 00:15:40.842 "uuid": "5e34d33a-f1cc-4730-8eb7-801d22dc19e0", 00:15:40.842 
"assigned_rate_limits": { 00:15:40.842 "rw_ios_per_sec": 0, 00:15:40.842 "rw_mbytes_per_sec": 0, 00:15:40.842 "r_mbytes_per_sec": 0, 00:15:40.842 "w_mbytes_per_sec": 0 00:15:40.842 }, 00:15:40.842 "claimed": true, 00:15:40.842 "claim_type": "exclusive_write", 00:15:40.842 "zoned": false, 00:15:40.842 "supported_io_types": { 00:15:40.842 "read": true, 00:15:40.842 "write": true, 00:15:40.842 "unmap": true, 00:15:40.842 "flush": true, 00:15:40.842 "reset": true, 00:15:40.842 "nvme_admin": false, 00:15:40.842 "nvme_io": false, 00:15:40.842 "nvme_io_md": false, 00:15:40.842 "write_zeroes": true, 00:15:40.842 "zcopy": true, 00:15:40.842 "get_zone_info": false, 00:15:40.842 "zone_management": false, 00:15:40.842 "zone_append": false, 00:15:40.842 "compare": false, 00:15:40.842 "compare_and_write": false, 00:15:40.842 "abort": true, 00:15:40.842 "seek_hole": false, 00:15:40.842 "seek_data": false, 00:15:40.842 "copy": true, 00:15:40.842 "nvme_iov_md": false 00:15:40.842 }, 00:15:40.842 "memory_domains": [ 00:15:40.842 { 00:15:40.842 "dma_device_id": "system", 00:15:40.842 "dma_device_type": 1 00:15:40.842 }, 00:15:40.842 { 00:15:40.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.842 "dma_device_type": 2 00:15:40.842 } 00:15:40.842 ], 00:15:40.842 "driver_specific": {} 00:15:40.842 } 00:15:40.842 ] 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.842 "name": "Existed_Raid", 00:15:40.842 "uuid": "431a4715-0de6-4168-b036-7b89dd704145", 00:15:40.842 "strip_size_kb": 64, 00:15:40.842 "state": "configuring", 00:15:40.842 "raid_level": "raid5f", 00:15:40.842 "superblock": true, 00:15:40.842 "num_base_bdevs": 4, 00:15:40.842 "num_base_bdevs_discovered": 3, 
00:15:40.842 "num_base_bdevs_operational": 4, 00:15:40.842 "base_bdevs_list": [ 00:15:40.842 { 00:15:40.842 "name": "BaseBdev1", 00:15:40.842 "uuid": "30a496b2-8eec-4e57-b9d2-a798b39d4c7e", 00:15:40.842 "is_configured": true, 00:15:40.842 "data_offset": 2048, 00:15:40.842 "data_size": 63488 00:15:40.842 }, 00:15:40.842 { 00:15:40.842 "name": "BaseBdev2", 00:15:40.842 "uuid": "d33672fa-240d-4c23-af9d-4c24f715c175", 00:15:40.842 "is_configured": true, 00:15:40.842 "data_offset": 2048, 00:15:40.842 "data_size": 63488 00:15:40.842 }, 00:15:40.842 { 00:15:40.842 "name": "BaseBdev3", 00:15:40.842 "uuid": "5e34d33a-f1cc-4730-8eb7-801d22dc19e0", 00:15:40.842 "is_configured": true, 00:15:40.842 "data_offset": 2048, 00:15:40.842 "data_size": 63488 00:15:40.842 }, 00:15:40.842 { 00:15:40.842 "name": "BaseBdev4", 00:15:40.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.842 "is_configured": false, 00:15:40.842 "data_offset": 0, 00:15:40.842 "data_size": 0 00:15:40.842 } 00:15:40.842 ] 00:15:40.842 }' 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.842 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.410 [2024-11-18 10:44:07.050162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:41.410 [2024-11-18 10:44:07.050507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:41.410 [2024-11-18 10:44:07.050561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:41.410 [2024-11-18 
10:44:07.050830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:41.410 BaseBdev4 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.410 [2024-11-18 10:44:07.057461] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:41.410 [2024-11-18 10:44:07.057528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:41.410 [2024-11-18 10:44:07.057690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.410 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:41.411 10:44:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.411 [ 00:15:41.411 { 00:15:41.411 "name": "BaseBdev4", 00:15:41.411 "aliases": [ 00:15:41.411 "edd46a09-edb0-4bcb-9890-4f371f3b6218" 00:15:41.411 ], 00:15:41.411 "product_name": "Malloc disk", 00:15:41.411 "block_size": 512, 00:15:41.411 "num_blocks": 65536, 00:15:41.411 "uuid": "edd46a09-edb0-4bcb-9890-4f371f3b6218", 00:15:41.411 "assigned_rate_limits": { 00:15:41.411 "rw_ios_per_sec": 0, 00:15:41.411 "rw_mbytes_per_sec": 0, 00:15:41.411 "r_mbytes_per_sec": 0, 00:15:41.411 "w_mbytes_per_sec": 0 00:15:41.411 }, 00:15:41.411 "claimed": true, 00:15:41.411 "claim_type": "exclusive_write", 00:15:41.411 "zoned": false, 00:15:41.411 "supported_io_types": { 00:15:41.411 "read": true, 00:15:41.411 "write": true, 00:15:41.411 "unmap": true, 00:15:41.411 "flush": true, 00:15:41.411 "reset": true, 00:15:41.411 "nvme_admin": false, 00:15:41.411 "nvme_io": false, 00:15:41.411 "nvme_io_md": false, 00:15:41.411 "write_zeroes": true, 00:15:41.411 "zcopy": true, 00:15:41.411 "get_zone_info": false, 00:15:41.411 "zone_management": false, 00:15:41.411 "zone_append": false, 00:15:41.411 "compare": false, 00:15:41.411 "compare_and_write": false, 00:15:41.411 "abort": true, 00:15:41.411 "seek_hole": false, 00:15:41.411 "seek_data": false, 00:15:41.411 "copy": true, 00:15:41.411 "nvme_iov_md": false 00:15:41.411 }, 00:15:41.411 "memory_domains": [ 00:15:41.411 { 00:15:41.411 "dma_device_id": "system", 00:15:41.411 "dma_device_type": 1 00:15:41.411 }, 00:15:41.411 { 00:15:41.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.411 "dma_device_type": 2 00:15:41.411 } 00:15:41.411 ], 00:15:41.411 "driver_specific": {} 00:15:41.411 } 00:15:41.411 ] 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.411 10:44:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.411 "name": "Existed_Raid", 00:15:41.411 "uuid": "431a4715-0de6-4168-b036-7b89dd704145", 00:15:41.411 "strip_size_kb": 64, 00:15:41.411 "state": "online", 00:15:41.411 "raid_level": "raid5f", 00:15:41.411 "superblock": true, 00:15:41.411 "num_base_bdevs": 4, 00:15:41.411 "num_base_bdevs_discovered": 4, 00:15:41.411 "num_base_bdevs_operational": 4, 00:15:41.411 "base_bdevs_list": [ 00:15:41.411 { 00:15:41.411 "name": "BaseBdev1", 00:15:41.411 "uuid": "30a496b2-8eec-4e57-b9d2-a798b39d4c7e", 00:15:41.411 "is_configured": true, 00:15:41.411 "data_offset": 2048, 00:15:41.411 "data_size": 63488 00:15:41.411 }, 00:15:41.411 { 00:15:41.411 "name": "BaseBdev2", 00:15:41.411 "uuid": "d33672fa-240d-4c23-af9d-4c24f715c175", 00:15:41.411 "is_configured": true, 00:15:41.411 "data_offset": 2048, 00:15:41.411 "data_size": 63488 00:15:41.411 }, 00:15:41.411 { 00:15:41.411 "name": "BaseBdev3", 00:15:41.411 "uuid": "5e34d33a-f1cc-4730-8eb7-801d22dc19e0", 00:15:41.411 "is_configured": true, 00:15:41.411 "data_offset": 2048, 00:15:41.411 "data_size": 63488 00:15:41.411 }, 00:15:41.411 { 00:15:41.411 "name": "BaseBdev4", 00:15:41.411 "uuid": "edd46a09-edb0-4bcb-9890-4f371f3b6218", 00:15:41.411 "is_configured": true, 00:15:41.411 "data_offset": 2048, 00:15:41.411 "data_size": 63488 00:15:41.411 } 00:15:41.411 ] 00:15:41.411 }' 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.411 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.671 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:41.671 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.930 [2024-11-18 10:44:07.568481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:41.930 "name": "Existed_Raid", 00:15:41.930 "aliases": [ 00:15:41.930 "431a4715-0de6-4168-b036-7b89dd704145" 00:15:41.930 ], 00:15:41.930 "product_name": "Raid Volume", 00:15:41.930 "block_size": 512, 00:15:41.930 "num_blocks": 190464, 00:15:41.930 "uuid": "431a4715-0de6-4168-b036-7b89dd704145", 00:15:41.930 "assigned_rate_limits": { 00:15:41.930 "rw_ios_per_sec": 0, 00:15:41.930 "rw_mbytes_per_sec": 0, 00:15:41.930 "r_mbytes_per_sec": 0, 00:15:41.930 "w_mbytes_per_sec": 0 00:15:41.930 }, 00:15:41.930 "claimed": false, 00:15:41.930 "zoned": false, 00:15:41.930 "supported_io_types": { 00:15:41.930 "read": true, 00:15:41.930 "write": true, 00:15:41.930 "unmap": false, 00:15:41.930 "flush": false, 
00:15:41.930 "reset": true, 00:15:41.930 "nvme_admin": false, 00:15:41.930 "nvme_io": false, 00:15:41.930 "nvme_io_md": false, 00:15:41.930 "write_zeroes": true, 00:15:41.930 "zcopy": false, 00:15:41.930 "get_zone_info": false, 00:15:41.930 "zone_management": false, 00:15:41.930 "zone_append": false, 00:15:41.930 "compare": false, 00:15:41.930 "compare_and_write": false, 00:15:41.930 "abort": false, 00:15:41.930 "seek_hole": false, 00:15:41.930 "seek_data": false, 00:15:41.930 "copy": false, 00:15:41.930 "nvme_iov_md": false 00:15:41.930 }, 00:15:41.930 "driver_specific": { 00:15:41.930 "raid": { 00:15:41.930 "uuid": "431a4715-0de6-4168-b036-7b89dd704145", 00:15:41.930 "strip_size_kb": 64, 00:15:41.930 "state": "online", 00:15:41.930 "raid_level": "raid5f", 00:15:41.930 "superblock": true, 00:15:41.930 "num_base_bdevs": 4, 00:15:41.930 "num_base_bdevs_discovered": 4, 00:15:41.930 "num_base_bdevs_operational": 4, 00:15:41.930 "base_bdevs_list": [ 00:15:41.930 { 00:15:41.930 "name": "BaseBdev1", 00:15:41.930 "uuid": "30a496b2-8eec-4e57-b9d2-a798b39d4c7e", 00:15:41.930 "is_configured": true, 00:15:41.930 "data_offset": 2048, 00:15:41.930 "data_size": 63488 00:15:41.930 }, 00:15:41.930 { 00:15:41.930 "name": "BaseBdev2", 00:15:41.930 "uuid": "d33672fa-240d-4c23-af9d-4c24f715c175", 00:15:41.930 "is_configured": true, 00:15:41.930 "data_offset": 2048, 00:15:41.930 "data_size": 63488 00:15:41.930 }, 00:15:41.930 { 00:15:41.930 "name": "BaseBdev3", 00:15:41.930 "uuid": "5e34d33a-f1cc-4730-8eb7-801d22dc19e0", 00:15:41.930 "is_configured": true, 00:15:41.930 "data_offset": 2048, 00:15:41.930 "data_size": 63488 00:15:41.930 }, 00:15:41.930 { 00:15:41.930 "name": "BaseBdev4", 00:15:41.930 "uuid": "edd46a09-edb0-4bcb-9890-4f371f3b6218", 00:15:41.930 "is_configured": true, 00:15:41.930 "data_offset": 2048, 00:15:41.930 "data_size": 63488 00:15:41.930 } 00:15:41.930 ] 00:15:41.930 } 00:15:41.930 } 00:15:41.930 }' 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:41.930 BaseBdev2 00:15:41.930 BaseBdev3 00:15:41.930 BaseBdev4' 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.930 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:41.931 10:44:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.931 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.253 10:44:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.253 [2024-11-18 10:44:07.895815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.253 10:44:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.253 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.253 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.253 "name": "Existed_Raid", 00:15:42.253 "uuid": "431a4715-0de6-4168-b036-7b89dd704145", 00:15:42.253 "strip_size_kb": 64, 00:15:42.253 "state": "online", 00:15:42.253 "raid_level": "raid5f", 00:15:42.253 "superblock": true, 00:15:42.253 "num_base_bdevs": 4, 00:15:42.253 "num_base_bdevs_discovered": 3, 00:15:42.253 "num_base_bdevs_operational": 3, 00:15:42.253 "base_bdevs_list": [ 00:15:42.253 { 00:15:42.253 "name": 
null, 00:15:42.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.253 "is_configured": false, 00:15:42.253 "data_offset": 0, 00:15:42.253 "data_size": 63488 00:15:42.253 }, 00:15:42.253 { 00:15:42.253 "name": "BaseBdev2", 00:15:42.253 "uuid": "d33672fa-240d-4c23-af9d-4c24f715c175", 00:15:42.253 "is_configured": true, 00:15:42.253 "data_offset": 2048, 00:15:42.253 "data_size": 63488 00:15:42.253 }, 00:15:42.253 { 00:15:42.253 "name": "BaseBdev3", 00:15:42.253 "uuid": "5e34d33a-f1cc-4730-8eb7-801d22dc19e0", 00:15:42.253 "is_configured": true, 00:15:42.253 "data_offset": 2048, 00:15:42.253 "data_size": 63488 00:15:42.253 }, 00:15:42.253 { 00:15:42.253 "name": "BaseBdev4", 00:15:42.253 "uuid": "edd46a09-edb0-4bcb-9890-4f371f3b6218", 00:15:42.253 "is_configured": true, 00:15:42.253 "data_offset": 2048, 00:15:42.253 "data_size": 63488 00:15:42.253 } 00:15:42.253 ] 00:15:42.253 }' 00:15:42.254 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.254 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.823 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.823 [2024-11-18 10:44:08.515204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.823 [2024-11-18 10:44:08.515415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.823 [2024-11-18 10:44:08.602582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.824 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.824 [2024-11-18 10:44:08.658500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.084 [2024-11-18 
10:44:08.806076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:43.084 [2024-11-18 10:44:08.806124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.084 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.084 10:44:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.345 BaseBdev2 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.345 10:44:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.345 [ 00:15:43.345 { 00:15:43.345 "name": "BaseBdev2", 00:15:43.345 "aliases": [ 00:15:43.345 "1f6e4a8b-a877-4499-ae34-af8302aa74f5" 00:15:43.345 ], 00:15:43.345 "product_name": "Malloc disk", 00:15:43.345 "block_size": 512, 00:15:43.345 
"num_blocks": 65536, 00:15:43.345 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:43.345 "assigned_rate_limits": { 00:15:43.345 "rw_ios_per_sec": 0, 00:15:43.345 "rw_mbytes_per_sec": 0, 00:15:43.345 "r_mbytes_per_sec": 0, 00:15:43.345 "w_mbytes_per_sec": 0 00:15:43.345 }, 00:15:43.345 "claimed": false, 00:15:43.345 "zoned": false, 00:15:43.345 "supported_io_types": { 00:15:43.345 "read": true, 00:15:43.345 "write": true, 00:15:43.345 "unmap": true, 00:15:43.345 "flush": true, 00:15:43.345 "reset": true, 00:15:43.345 "nvme_admin": false, 00:15:43.345 "nvme_io": false, 00:15:43.345 "nvme_io_md": false, 00:15:43.345 "write_zeroes": true, 00:15:43.345 "zcopy": true, 00:15:43.345 "get_zone_info": false, 00:15:43.345 "zone_management": false, 00:15:43.345 "zone_append": false, 00:15:43.345 "compare": false, 00:15:43.345 "compare_and_write": false, 00:15:43.345 "abort": true, 00:15:43.345 "seek_hole": false, 00:15:43.345 "seek_data": false, 00:15:43.345 "copy": true, 00:15:43.345 "nvme_iov_md": false 00:15:43.345 }, 00:15:43.345 "memory_domains": [ 00:15:43.345 { 00:15:43.345 "dma_device_id": "system", 00:15:43.345 "dma_device_type": 1 00:15:43.345 }, 00:15:43.345 { 00:15:43.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.345 "dma_device_type": 2 00:15:43.345 } 00:15:43.345 ], 00:15:43.345 "driver_specific": {} 00:15:43.345 } 00:15:43.345 ] 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:43.345 10:44:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.345 BaseBdev3 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.345 [ 00:15:43.345 { 00:15:43.345 "name": "BaseBdev3", 00:15:43.345 "aliases": [ 00:15:43.345 
"cecd1061-e8b5-48a8-92ae-e228279c4eee" 00:15:43.345 ], 00:15:43.345 "product_name": "Malloc disk", 00:15:43.345 "block_size": 512, 00:15:43.345 "num_blocks": 65536, 00:15:43.345 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:43.345 "assigned_rate_limits": { 00:15:43.345 "rw_ios_per_sec": 0, 00:15:43.345 "rw_mbytes_per_sec": 0, 00:15:43.345 "r_mbytes_per_sec": 0, 00:15:43.345 "w_mbytes_per_sec": 0 00:15:43.345 }, 00:15:43.345 "claimed": false, 00:15:43.345 "zoned": false, 00:15:43.345 "supported_io_types": { 00:15:43.345 "read": true, 00:15:43.345 "write": true, 00:15:43.345 "unmap": true, 00:15:43.345 "flush": true, 00:15:43.345 "reset": true, 00:15:43.345 "nvme_admin": false, 00:15:43.345 "nvme_io": false, 00:15:43.345 "nvme_io_md": false, 00:15:43.345 "write_zeroes": true, 00:15:43.345 "zcopy": true, 00:15:43.345 "get_zone_info": false, 00:15:43.345 "zone_management": false, 00:15:43.345 "zone_append": false, 00:15:43.345 "compare": false, 00:15:43.345 "compare_and_write": false, 00:15:43.345 "abort": true, 00:15:43.345 "seek_hole": false, 00:15:43.345 "seek_data": false, 00:15:43.345 "copy": true, 00:15:43.345 "nvme_iov_md": false 00:15:43.345 }, 00:15:43.345 "memory_domains": [ 00:15:43.345 { 00:15:43.345 "dma_device_id": "system", 00:15:43.345 "dma_device_type": 1 00:15:43.345 }, 00:15:43.345 { 00:15:43.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.345 "dma_device_type": 2 00:15:43.345 } 00:15:43.345 ], 00:15:43.345 "driver_specific": {} 00:15:43.345 } 00:15:43.345 ] 00:15:43.345 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.346 10:44:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.346 BaseBdev4 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:43.346 [ 00:15:43.346 { 00:15:43.346 "name": "BaseBdev4", 00:15:43.346 "aliases": [ 00:15:43.346 "282fccfb-1154-4414-b8dd-9f1d03b3a0b1" 00:15:43.346 ], 00:15:43.346 "product_name": "Malloc disk", 00:15:43.346 "block_size": 512, 00:15:43.346 "num_blocks": 65536, 00:15:43.346 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:43.346 "assigned_rate_limits": { 00:15:43.346 "rw_ios_per_sec": 0, 00:15:43.346 "rw_mbytes_per_sec": 0, 00:15:43.346 "r_mbytes_per_sec": 0, 00:15:43.346 "w_mbytes_per_sec": 0 00:15:43.346 }, 00:15:43.346 "claimed": false, 00:15:43.346 "zoned": false, 00:15:43.346 "supported_io_types": { 00:15:43.346 "read": true, 00:15:43.346 "write": true, 00:15:43.346 "unmap": true, 00:15:43.346 "flush": true, 00:15:43.346 "reset": true, 00:15:43.346 "nvme_admin": false, 00:15:43.346 "nvme_io": false, 00:15:43.346 "nvme_io_md": false, 00:15:43.346 "write_zeroes": true, 00:15:43.346 "zcopy": true, 00:15:43.346 "get_zone_info": false, 00:15:43.346 "zone_management": false, 00:15:43.346 "zone_append": false, 00:15:43.346 "compare": false, 00:15:43.346 "compare_and_write": false, 00:15:43.346 "abort": true, 00:15:43.346 "seek_hole": false, 00:15:43.346 "seek_data": false, 00:15:43.346 "copy": true, 00:15:43.346 "nvme_iov_md": false 00:15:43.346 }, 00:15:43.346 "memory_domains": [ 00:15:43.346 { 00:15:43.346 "dma_device_id": "system", 00:15:43.346 "dma_device_type": 1 00:15:43.346 }, 00:15:43.346 { 00:15:43.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.346 "dma_device_type": 2 00:15:43.346 } 00:15:43.346 ], 00:15:43.346 "driver_specific": {} 00:15:43.346 } 00:15:43.346 ] 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:43.346 10:44:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.346 [2024-11-18 10:44:09.178466] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.346 [2024-11-18 10:44:09.178603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.346 [2024-11-18 10:44:09.178645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.346 [2024-11-18 10:44:09.180468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.346 [2024-11-18 10:44:09.180564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.346 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.606 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.606 "name": "Existed_Raid", 00:15:43.606 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:43.606 "strip_size_kb": 64, 00:15:43.606 "state": "configuring", 00:15:43.606 "raid_level": "raid5f", 00:15:43.606 "superblock": true, 00:15:43.606 "num_base_bdevs": 4, 00:15:43.606 "num_base_bdevs_discovered": 3, 00:15:43.606 "num_base_bdevs_operational": 4, 00:15:43.606 "base_bdevs_list": [ 00:15:43.606 { 00:15:43.606 "name": "BaseBdev1", 00:15:43.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.606 "is_configured": false, 00:15:43.606 "data_offset": 0, 00:15:43.606 "data_size": 0 00:15:43.606 }, 00:15:43.606 { 00:15:43.606 "name": "BaseBdev2", 00:15:43.606 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:43.606 "is_configured": true, 00:15:43.606 "data_offset": 2048, 00:15:43.606 
"data_size": 63488 00:15:43.606 }, 00:15:43.606 { 00:15:43.606 "name": "BaseBdev3", 00:15:43.606 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:43.606 "is_configured": true, 00:15:43.606 "data_offset": 2048, 00:15:43.606 "data_size": 63488 00:15:43.606 }, 00:15:43.606 { 00:15:43.606 "name": "BaseBdev4", 00:15:43.606 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:43.606 "is_configured": true, 00:15:43.606 "data_offset": 2048, 00:15:43.606 "data_size": 63488 00:15:43.606 } 00:15:43.606 ] 00:15:43.606 }' 00:15:43.606 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.606 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.866 [2024-11-18 10:44:09.673604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.866 10:44:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.866 "name": "Existed_Raid", 00:15:43.866 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:43.866 "strip_size_kb": 64, 00:15:43.866 "state": "configuring", 00:15:43.866 "raid_level": "raid5f", 00:15:43.866 "superblock": true, 00:15:43.866 "num_base_bdevs": 4, 00:15:43.866 "num_base_bdevs_discovered": 2, 00:15:43.866 "num_base_bdevs_operational": 4, 00:15:43.866 "base_bdevs_list": [ 00:15:43.866 { 00:15:43.866 "name": "BaseBdev1", 00:15:43.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.866 "is_configured": false, 00:15:43.866 "data_offset": 0, 00:15:43.866 "data_size": 0 00:15:43.866 }, 00:15:43.866 { 00:15:43.866 "name": null, 00:15:43.866 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:43.866 
"is_configured": false, 00:15:43.866 "data_offset": 0, 00:15:43.866 "data_size": 63488 00:15:43.866 }, 00:15:43.866 { 00:15:43.866 "name": "BaseBdev3", 00:15:43.866 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:43.866 "is_configured": true, 00:15:43.866 "data_offset": 2048, 00:15:43.866 "data_size": 63488 00:15:43.866 }, 00:15:43.866 { 00:15:43.866 "name": "BaseBdev4", 00:15:43.866 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:43.866 "is_configured": true, 00:15:43.866 "data_offset": 2048, 00:15:43.866 "data_size": 63488 00:15:43.866 } 00:15:43.866 ] 00:15:43.866 }' 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.866 10:44:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.435 [2024-11-18 10:44:10.163426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
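The checks of the form `[[ false == \f\a\l\s\e ]]` in the trace above are how xtrace renders a quoted right-hand side: bash escapes each character so the comparison is literal rather than a glob match. A small sketch of the distinction (the `star` variable is illustrative, not from the test suite):

```shell
# Inside [[ ... == ... ]], an unquoted right-hand side is a glob pattern;
# quoting it (which xtrace prints as per-character escapes) makes it literal.
value=false
[[ $value == \f\a\l\s\e ]] && echo "literal match"

star='*'
[[ abc == $star ]] && echo "unquoted * matches as a glob"
[[ abc == "$star" ]] || echo "quoted * is a literal asterisk"
```

This is why the test's `[[ false == \f\a\l\s\e ]]` and `[[ true == \t\r\u\e ]]` lines are exact string comparisons of the `jq` output, immune to glob expansion.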
00:15:44.435 BaseBdev1 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.435 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.435 [ 00:15:44.435 { 00:15:44.435 "name": "BaseBdev1", 00:15:44.435 "aliases": [ 00:15:44.435 "65f3f973-a4d5-4e1b-8663-637c1146e7f4" 00:15:44.435 ], 00:15:44.435 "product_name": "Malloc disk", 00:15:44.435 "block_size": 512, 00:15:44.435 "num_blocks": 65536, 00:15:44.435 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 
00:15:44.435 "assigned_rate_limits": { 00:15:44.435 "rw_ios_per_sec": 0, 00:15:44.435 "rw_mbytes_per_sec": 0, 00:15:44.435 "r_mbytes_per_sec": 0, 00:15:44.435 "w_mbytes_per_sec": 0 00:15:44.435 }, 00:15:44.435 "claimed": true, 00:15:44.435 "claim_type": "exclusive_write", 00:15:44.435 "zoned": false, 00:15:44.435 "supported_io_types": { 00:15:44.435 "read": true, 00:15:44.435 "write": true, 00:15:44.435 "unmap": true, 00:15:44.435 "flush": true, 00:15:44.435 "reset": true, 00:15:44.435 "nvme_admin": false, 00:15:44.435 "nvme_io": false, 00:15:44.435 "nvme_io_md": false, 00:15:44.435 "write_zeroes": true, 00:15:44.435 "zcopy": true, 00:15:44.435 "get_zone_info": false, 00:15:44.435 "zone_management": false, 00:15:44.435 "zone_append": false, 00:15:44.435 "compare": false, 00:15:44.435 "compare_and_write": false, 00:15:44.435 "abort": true, 00:15:44.435 "seek_hole": false, 00:15:44.435 "seek_data": false, 00:15:44.435 "copy": true, 00:15:44.435 "nvme_iov_md": false 00:15:44.435 }, 00:15:44.435 "memory_domains": [ 00:15:44.435 { 00:15:44.435 "dma_device_id": "system", 00:15:44.435 "dma_device_type": 1 00:15:44.435 }, 00:15:44.435 { 00:15:44.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.435 "dma_device_type": 2 00:15:44.435 } 00:15:44.435 ], 00:15:44.436 "driver_specific": {} 00:15:44.436 } 00:15:44.436 ] 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.436 10:44:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.436 "name": "Existed_Raid", 00:15:44.436 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:44.436 "strip_size_kb": 64, 00:15:44.436 "state": "configuring", 00:15:44.436 "raid_level": "raid5f", 00:15:44.436 "superblock": true, 00:15:44.436 "num_base_bdevs": 4, 00:15:44.436 "num_base_bdevs_discovered": 3, 00:15:44.436 "num_base_bdevs_operational": 4, 00:15:44.436 "base_bdevs_list": [ 00:15:44.436 { 00:15:44.436 "name": "BaseBdev1", 00:15:44.436 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 
00:15:44.436 "is_configured": true, 00:15:44.436 "data_offset": 2048, 00:15:44.436 "data_size": 63488 00:15:44.436 }, 00:15:44.436 { 00:15:44.436 "name": null, 00:15:44.436 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:44.436 "is_configured": false, 00:15:44.436 "data_offset": 0, 00:15:44.436 "data_size": 63488 00:15:44.436 }, 00:15:44.436 { 00:15:44.436 "name": "BaseBdev3", 00:15:44.436 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:44.436 "is_configured": true, 00:15:44.436 "data_offset": 2048, 00:15:44.436 "data_size": 63488 00:15:44.436 }, 00:15:44.436 { 00:15:44.436 "name": "BaseBdev4", 00:15:44.436 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:44.436 "is_configured": true, 00:15:44.436 "data_offset": 2048, 00:15:44.436 "data_size": 63488 00:15:44.436 } 00:15:44.436 ] 00:15:44.436 }' 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.436 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.004 [2024-11-18 10:44:10.647143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.004 "name": "Existed_Raid", 00:15:45.004 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:45.004 "strip_size_kb": 64, 00:15:45.004 "state": "configuring", 00:15:45.004 "raid_level": "raid5f", 00:15:45.004 "superblock": true, 00:15:45.004 "num_base_bdevs": 4, 00:15:45.004 "num_base_bdevs_discovered": 2, 00:15:45.004 "num_base_bdevs_operational": 4, 00:15:45.004 "base_bdevs_list": [ 00:15:45.004 { 00:15:45.004 "name": "BaseBdev1", 00:15:45.004 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 00:15:45.004 "is_configured": true, 00:15:45.004 "data_offset": 2048, 00:15:45.004 "data_size": 63488 00:15:45.004 }, 00:15:45.004 { 00:15:45.004 "name": null, 00:15:45.004 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:45.004 "is_configured": false, 00:15:45.004 "data_offset": 0, 00:15:45.004 "data_size": 63488 00:15:45.004 }, 00:15:45.004 { 00:15:45.004 "name": null, 00:15:45.004 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:45.004 "is_configured": false, 00:15:45.004 "data_offset": 0, 00:15:45.004 "data_size": 63488 00:15:45.004 }, 00:15:45.004 { 00:15:45.004 "name": "BaseBdev4", 00:15:45.004 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:45.004 "is_configured": true, 00:15:45.004 "data_offset": 2048, 00:15:45.004 "data_size": 63488 00:15:45.004 } 00:15:45.004 ] 00:15:45.004 }' 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.004 10:44:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.263 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.263 10:44:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.263 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.263 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.263 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.522 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.523 [2024-11-18 10:44:11.162290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.523 "name": "Existed_Raid", 00:15:45.523 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:45.523 "strip_size_kb": 64, 00:15:45.523 "state": "configuring", 00:15:45.523 "raid_level": "raid5f", 00:15:45.523 "superblock": true, 00:15:45.523 "num_base_bdevs": 4, 00:15:45.523 "num_base_bdevs_discovered": 3, 00:15:45.523 "num_base_bdevs_operational": 4, 00:15:45.523 "base_bdevs_list": [ 00:15:45.523 { 00:15:45.523 "name": "BaseBdev1", 00:15:45.523 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 00:15:45.523 "is_configured": true, 00:15:45.523 "data_offset": 2048, 00:15:45.523 "data_size": 63488 00:15:45.523 }, 00:15:45.523 { 00:15:45.523 "name": null, 00:15:45.523 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:45.523 "is_configured": false, 00:15:45.523 "data_offset": 0, 00:15:45.523 "data_size": 63488 00:15:45.523 }, 00:15:45.523 { 00:15:45.523 "name": "BaseBdev3", 00:15:45.523 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 
00:15:45.523 "is_configured": true, 00:15:45.523 "data_offset": 2048, 00:15:45.523 "data_size": 63488 00:15:45.523 }, 00:15:45.523 { 00:15:45.523 "name": "BaseBdev4", 00:15:45.523 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:45.523 "is_configured": true, 00:15:45.523 "data_offset": 2048, 00:15:45.523 "data_size": 63488 00:15:45.523 } 00:15:45.523 ] 00:15:45.523 }' 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.523 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.782 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.782 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.782 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.782 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.782 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.042 [2024-11-18 10:44:11.689387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
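The state checks above lean on two `jq` filters: `.[] | select(.name == "Existed_Raid")` to pull one raid record out of `bdev_raid_get_bdevs all`, and `.[0].base_bdevs_list[N].is_configured` to probe a single base-bdev slot. A runnable sketch against an abridged copy of the JSON shown in the log (assumes `jq` is installed; the sample is trimmed to the fields the filters touch):

```shell
# Abridged bdev_raid_get_bdevs output, mirroring the record in the log.
json='[{"name":"Existed_Raid","state":"configuring",
        "base_bdevs_list":[
          {"name":"BaseBdev1","is_configured":true},
          {"name":null,"is_configured":false},
          {"name":"BaseBdev3","is_configured":true},
          {"name":"BaseBdev4","is_configured":true}]}]'

# bdev_raid.sh@113-style: select the record by raid name, read one field.
jq -r '.[] | select(.name == "Existed_Raid") | .state' <<<"$json"
# prints: configuring

# bdev_raid.sh@308-style: is base bdev slot 2 configured?
jq '.[0].base_bdevs_list[2].is_configured' <<<"$json"
# prints: true
```

Note that a removed base bdev keeps its slot with `"name": null` and `"is_configured": false`, which is exactly what the slot-index probes distinguish from a claimed bdev.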
configuring raid5f 64 4 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.042 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.043 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.043 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.043 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.043 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.043 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.043 "name": "Existed_Raid", 00:15:46.043 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:46.043 "strip_size_kb": 64, 00:15:46.043 "state": "configuring", 00:15:46.043 "raid_level": "raid5f", 
00:15:46.043 "superblock": true, 00:15:46.043 "num_base_bdevs": 4, 00:15:46.043 "num_base_bdevs_discovered": 2, 00:15:46.043 "num_base_bdevs_operational": 4, 00:15:46.043 "base_bdevs_list": [ 00:15:46.043 { 00:15:46.043 "name": null, 00:15:46.043 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 00:15:46.043 "is_configured": false, 00:15:46.043 "data_offset": 0, 00:15:46.043 "data_size": 63488 00:15:46.043 }, 00:15:46.043 { 00:15:46.043 "name": null, 00:15:46.043 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:46.043 "is_configured": false, 00:15:46.043 "data_offset": 0, 00:15:46.043 "data_size": 63488 00:15:46.043 }, 00:15:46.043 { 00:15:46.043 "name": "BaseBdev3", 00:15:46.043 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:46.043 "is_configured": true, 00:15:46.043 "data_offset": 2048, 00:15:46.043 "data_size": 63488 00:15:46.043 }, 00:15:46.043 { 00:15:46.043 "name": "BaseBdev4", 00:15:46.043 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:46.043 "is_configured": true, 00:15:46.043 "data_offset": 2048, 00:15:46.043 "data_size": 63488 00:15:46.043 } 00:15:46.043 ] 00:15:46.043 }' 00:15:46.043 10:44:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.043 10:44:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.613 [2024-11-18 10:44:12.319359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.613 "name": "Existed_Raid", 00:15:46.613 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:46.613 "strip_size_kb": 64, 00:15:46.613 "state": "configuring", 00:15:46.613 "raid_level": "raid5f", 00:15:46.613 "superblock": true, 00:15:46.613 "num_base_bdevs": 4, 00:15:46.613 "num_base_bdevs_discovered": 3, 00:15:46.613 "num_base_bdevs_operational": 4, 00:15:46.613 "base_bdevs_list": [ 00:15:46.613 { 00:15:46.613 "name": null, 00:15:46.613 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 00:15:46.613 "is_configured": false, 00:15:46.613 "data_offset": 0, 00:15:46.613 "data_size": 63488 00:15:46.613 }, 00:15:46.613 { 00:15:46.613 "name": "BaseBdev2", 00:15:46.613 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:46.613 "is_configured": true, 00:15:46.613 "data_offset": 2048, 00:15:46.613 "data_size": 63488 00:15:46.613 }, 00:15:46.613 { 00:15:46.613 "name": "BaseBdev3", 00:15:46.613 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:46.613 "is_configured": true, 00:15:46.613 "data_offset": 2048, 00:15:46.613 "data_size": 63488 00:15:46.613 }, 00:15:46.613 { 00:15:46.613 "name": "BaseBdev4", 00:15:46.613 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:46.613 "is_configured": true, 00:15:46.613 "data_offset": 2048, 00:15:46.613 "data_size": 63488 00:15:46.613 } 00:15:46.613 ] 00:15:46.613 }' 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:15:46.613 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.183 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.183 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 65f3f973-a4d5-4e1b-8663-637c1146e7f4 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 [2024-11-18 10:44:12.912275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:47.184 [2024-11-18 10:44:12.912501] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:47.184 [2024-11-18 10:44:12.912513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:47.184 [2024-11-18 10:44:12.912748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:47.184 NewBaseBdev 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 [2024-11-18 10:44:12.918935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:47.184 [2024-11-18 10:44:12.919024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:47.184 [2024-11-18 10:44:12.919223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 [ 00:15:47.184 { 00:15:47.184 "name": "NewBaseBdev", 00:15:47.184 "aliases": [ 00:15:47.184 "65f3f973-a4d5-4e1b-8663-637c1146e7f4" 00:15:47.184 ], 00:15:47.184 "product_name": "Malloc disk", 00:15:47.184 "block_size": 512, 00:15:47.184 "num_blocks": 65536, 00:15:47.184 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 00:15:47.184 "assigned_rate_limits": { 00:15:47.184 "rw_ios_per_sec": 0, 00:15:47.184 "rw_mbytes_per_sec": 0, 00:15:47.184 "r_mbytes_per_sec": 0, 00:15:47.184 "w_mbytes_per_sec": 0 00:15:47.184 }, 00:15:47.184 "claimed": true, 00:15:47.184 "claim_type": "exclusive_write", 00:15:47.184 "zoned": false, 00:15:47.184 "supported_io_types": { 00:15:47.184 "read": true, 00:15:47.184 "write": true, 00:15:47.184 "unmap": true, 00:15:47.184 "flush": true, 00:15:47.184 "reset": true, 00:15:47.184 "nvme_admin": false, 00:15:47.184 "nvme_io": false, 00:15:47.184 "nvme_io_md": false, 00:15:47.184 "write_zeroes": true, 00:15:47.184 "zcopy": true, 00:15:47.184 "get_zone_info": false, 00:15:47.184 "zone_management": false, 00:15:47.184 "zone_append": false, 00:15:47.184 "compare": false, 00:15:47.184 "compare_and_write": false, 00:15:47.184 "abort": true, 00:15:47.184 "seek_hole": false, 00:15:47.184 "seek_data": false, 00:15:47.184 "copy": true, 00:15:47.184 "nvme_iov_md": false 00:15:47.184 }, 00:15:47.184 "memory_domains": [ 00:15:47.184 { 00:15:47.184 "dma_device_id": "system", 00:15:47.184 "dma_device_type": 1 00:15:47.184 }, 00:15:47.184 { 00:15:47.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.184 "dma_device_type": 2 00:15:47.184 } 
00:15:47.184 ], 00:15:47.184 "driver_specific": {} 00:15:47.184 } 00:15:47.184 ] 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 10:44:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.184 
10:44:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.184 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.184 "name": "Existed_Raid", 00:15:47.184 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:47.184 "strip_size_kb": 64, 00:15:47.184 "state": "online", 00:15:47.184 "raid_level": "raid5f", 00:15:47.184 "superblock": true, 00:15:47.184 "num_base_bdevs": 4, 00:15:47.184 "num_base_bdevs_discovered": 4, 00:15:47.184 "num_base_bdevs_operational": 4, 00:15:47.184 "base_bdevs_list": [ 00:15:47.184 { 00:15:47.184 "name": "NewBaseBdev", 00:15:47.184 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 00:15:47.184 "is_configured": true, 00:15:47.184 "data_offset": 2048, 00:15:47.184 "data_size": 63488 00:15:47.184 }, 00:15:47.184 { 00:15:47.184 "name": "BaseBdev2", 00:15:47.184 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:47.184 "is_configured": true, 00:15:47.184 "data_offset": 2048, 00:15:47.184 "data_size": 63488 00:15:47.184 }, 00:15:47.184 { 00:15:47.184 "name": "BaseBdev3", 00:15:47.184 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:47.184 "is_configured": true, 00:15:47.184 "data_offset": 2048, 00:15:47.184 "data_size": 63488 00:15:47.184 }, 00:15:47.184 { 00:15:47.184 "name": "BaseBdev4", 00:15:47.184 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:47.184 "is_configured": true, 00:15:47.184 "data_offset": 2048, 00:15:47.184 "data_size": 63488 00:15:47.184 } 00:15:47.184 ] 00:15:47.184 }' 00:15:47.184 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.184 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.755 [2024-11-18 10:44:13.422724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.755 "name": "Existed_Raid", 00:15:47.755 "aliases": [ 00:15:47.755 "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f" 00:15:47.755 ], 00:15:47.755 "product_name": "Raid Volume", 00:15:47.755 "block_size": 512, 00:15:47.755 "num_blocks": 190464, 00:15:47.755 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:47.755 "assigned_rate_limits": { 00:15:47.755 "rw_ios_per_sec": 0, 00:15:47.755 "rw_mbytes_per_sec": 0, 00:15:47.755 "r_mbytes_per_sec": 0, 00:15:47.755 "w_mbytes_per_sec": 0 00:15:47.755 }, 00:15:47.755 "claimed": false, 00:15:47.755 "zoned": false, 00:15:47.755 "supported_io_types": { 00:15:47.755 "read": true, 00:15:47.755 "write": true, 00:15:47.755 "unmap": false, 00:15:47.755 "flush": false, 
00:15:47.755 "reset": true, 00:15:47.755 "nvme_admin": false, 00:15:47.755 "nvme_io": false, 00:15:47.755 "nvme_io_md": false, 00:15:47.755 "write_zeroes": true, 00:15:47.755 "zcopy": false, 00:15:47.755 "get_zone_info": false, 00:15:47.755 "zone_management": false, 00:15:47.755 "zone_append": false, 00:15:47.755 "compare": false, 00:15:47.755 "compare_and_write": false, 00:15:47.755 "abort": false, 00:15:47.755 "seek_hole": false, 00:15:47.755 "seek_data": false, 00:15:47.755 "copy": false, 00:15:47.755 "nvme_iov_md": false 00:15:47.755 }, 00:15:47.755 "driver_specific": { 00:15:47.755 "raid": { 00:15:47.755 "uuid": "75c1ebeb-9b5e-4e37-ad64-4d5afae60f3f", 00:15:47.755 "strip_size_kb": 64, 00:15:47.755 "state": "online", 00:15:47.755 "raid_level": "raid5f", 00:15:47.755 "superblock": true, 00:15:47.755 "num_base_bdevs": 4, 00:15:47.755 "num_base_bdevs_discovered": 4, 00:15:47.755 "num_base_bdevs_operational": 4, 00:15:47.755 "base_bdevs_list": [ 00:15:47.755 { 00:15:47.755 "name": "NewBaseBdev", 00:15:47.755 "uuid": "65f3f973-a4d5-4e1b-8663-637c1146e7f4", 00:15:47.755 "is_configured": true, 00:15:47.755 "data_offset": 2048, 00:15:47.755 "data_size": 63488 00:15:47.755 }, 00:15:47.755 { 00:15:47.755 "name": "BaseBdev2", 00:15:47.755 "uuid": "1f6e4a8b-a877-4499-ae34-af8302aa74f5", 00:15:47.755 "is_configured": true, 00:15:47.755 "data_offset": 2048, 00:15:47.755 "data_size": 63488 00:15:47.755 }, 00:15:47.755 { 00:15:47.755 "name": "BaseBdev3", 00:15:47.755 "uuid": "cecd1061-e8b5-48a8-92ae-e228279c4eee", 00:15:47.755 "is_configured": true, 00:15:47.755 "data_offset": 2048, 00:15:47.755 "data_size": 63488 00:15:47.755 }, 00:15:47.755 { 00:15:47.755 "name": "BaseBdev4", 00:15:47.755 "uuid": "282fccfb-1154-4414-b8dd-9f1d03b3a0b1", 00:15:47.755 "is_configured": true, 00:15:47.755 "data_offset": 2048, 00:15:47.755 "data_size": 63488 00:15:47.755 } 00:15:47.755 ] 00:15:47.755 } 00:15:47.755 } 00:15:47.755 }' 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.755 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:47.755 BaseBdev2 00:15:47.756 BaseBdev3 00:15:47.756 BaseBdev4' 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.756 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.016 10:44:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.016 [2024-11-18 10:44:13.702076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.016 [2024-11-18 10:44:13.702101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.016 [2024-11-18 10:44:13.702161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.016 [2024-11-18 10:44:13.702440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.016 [2024-11-18 10:44:13.702456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83228 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83228 ']' 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83228 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83228 00:15:48.016 killing process with pid 83228 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83228' 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83228 00:15:48.016 [2024-11-18 10:44:13.749484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.016 10:44:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83228 00:15:48.288 [2024-11-18 10:44:14.118291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.677 10:44:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:49.677 00:15:49.677 real 0m11.656s 00:15:49.677 user 0m18.580s 00:15:49.677 sys 0m2.216s 00:15:49.677 10:44:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.677 ************************************ 00:15:49.677 END TEST raid5f_state_function_test_sb 00:15:49.677 ************************************ 00:15:49.677 10:44:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.677 10:44:15 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:49.677 10:44:15 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:49.677 10:44:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.677 10:44:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.677 ************************************ 00:15:49.677 START TEST raid5f_superblock_test 00:15:49.677 ************************************ 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83909 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83909 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83909 ']' 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.677 10:44:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.677 [2024-11-18 10:44:15.349146] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:49.677 [2024-11-18 10:44:15.349403] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83909 ] 00:15:49.677 [2024-11-18 10:44:15.530557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.938 [2024-11-18 10:44:15.635426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.938 [2024-11-18 10:44:15.820640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.938 [2024-11-18 10:44:15.820765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.509 malloc1 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.509 [2024-11-18 10:44:16.209828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.509 [2024-11-18 10:44:16.209993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.509 [2024-11-18 10:44:16.210035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:50.509 [2024-11-18 10:44:16.210064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.509 [2024-11-18 10:44:16.212142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.509 [2024-11-18 10:44:16.212236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.509 pt1 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.509 malloc2 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.509 [2024-11-18 10:44:16.267266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.509 [2024-11-18 10:44:16.267318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.509 [2024-11-18 10:44:16.267337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.509 [2024-11-18 10:44:16.267345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.509 [2024-11-18 10:44:16.269247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.509 [2024-11-18 10:44:16.269279] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.509 pt2 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.509 malloc3 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.509 [2024-11-18 10:44:16.355514] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:50.509 [2024-11-18 10:44:16.355639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.509 [2024-11-18 10:44:16.355676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:50.509 [2024-11-18 10:44:16.355706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.509 [2024-11-18 10:44:16.357658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.509 [2024-11-18 10:44:16.357734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.509 pt3 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:50.509 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:50.510 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:50.510 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.510 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.510 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.510 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:50.510 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.510 10:44:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.770 malloc4 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.770 [2024-11-18 10:44:16.414732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:50.770 [2024-11-18 10:44:16.414831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.770 [2024-11-18 10:44:16.414864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:50.770 [2024-11-18 10:44:16.414891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.770 [2024-11-18 10:44:16.416814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.770 [2024-11-18 10:44:16.416884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:50.770 pt4 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.770 [2024-11-18 10:44:16.426753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.770 [2024-11-18 10:44:16.428443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.770 [2024-11-18 10:44:16.428541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:50.770 [2024-11-18 10:44:16.428618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:50.770 [2024-11-18 10:44:16.428833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:50.770 [2024-11-18 10:44:16.428852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:50.770 [2024-11-18 10:44:16.429066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:50.770 [2024-11-18 10:44:16.436225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:50.770 [2024-11-18 10:44:16.436248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:50.770 [2024-11-18 10:44:16.436416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.770 
10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.770 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.770 "name": "raid_bdev1", 00:15:50.770 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:50.770 "strip_size_kb": 64, 00:15:50.770 "state": "online", 00:15:50.770 "raid_level": "raid5f", 00:15:50.770 "superblock": true, 00:15:50.770 "num_base_bdevs": 4, 00:15:50.770 "num_base_bdevs_discovered": 4, 00:15:50.770 "num_base_bdevs_operational": 4, 00:15:50.770 "base_bdevs_list": [ 00:15:50.770 { 00:15:50.770 "name": "pt1", 00:15:50.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.771 "is_configured": true, 00:15:50.771 "data_offset": 2048, 00:15:50.771 "data_size": 63488 00:15:50.771 }, 00:15:50.771 { 00:15:50.771 "name": "pt2", 00:15:50.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.771 "is_configured": true, 00:15:50.771 "data_offset": 2048, 00:15:50.771 
"data_size": 63488 00:15:50.771 }, 00:15:50.771 { 00:15:50.771 "name": "pt3", 00:15:50.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.771 "is_configured": true, 00:15:50.771 "data_offset": 2048, 00:15:50.771 "data_size": 63488 00:15:50.771 }, 00:15:50.771 { 00:15:50.771 "name": "pt4", 00:15:50.771 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.771 "is_configured": true, 00:15:50.771 "data_offset": 2048, 00:15:50.771 "data_size": 63488 00:15:50.771 } 00:15:50.771 ] 00:15:50.771 }' 00:15:50.771 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.771 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.030 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:51.030 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:51.030 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.030 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.030 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.030 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.030 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.030 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.290 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.290 10:44:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.290 [2024-11-18 10:44:16.919786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.290 10:44:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.290 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.290 "name": "raid_bdev1", 00:15:51.290 "aliases": [ 00:15:51.290 "904ce3e9-24bb-49d6-bd0f-c251f0771c64" 00:15:51.290 ], 00:15:51.290 "product_name": "Raid Volume", 00:15:51.290 "block_size": 512, 00:15:51.290 "num_blocks": 190464, 00:15:51.290 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:51.290 "assigned_rate_limits": { 00:15:51.290 "rw_ios_per_sec": 0, 00:15:51.290 "rw_mbytes_per_sec": 0, 00:15:51.290 "r_mbytes_per_sec": 0, 00:15:51.290 "w_mbytes_per_sec": 0 00:15:51.290 }, 00:15:51.290 "claimed": false, 00:15:51.290 "zoned": false, 00:15:51.290 "supported_io_types": { 00:15:51.290 "read": true, 00:15:51.290 "write": true, 00:15:51.290 "unmap": false, 00:15:51.290 "flush": false, 00:15:51.290 "reset": true, 00:15:51.290 "nvme_admin": false, 00:15:51.290 "nvme_io": false, 00:15:51.290 "nvme_io_md": false, 00:15:51.290 "write_zeroes": true, 00:15:51.290 "zcopy": false, 00:15:51.290 "get_zone_info": false, 00:15:51.290 "zone_management": false, 00:15:51.290 "zone_append": false, 00:15:51.290 "compare": false, 00:15:51.290 "compare_and_write": false, 00:15:51.290 "abort": false, 00:15:51.290 "seek_hole": false, 00:15:51.290 "seek_data": false, 00:15:51.290 "copy": false, 00:15:51.290 "nvme_iov_md": false 00:15:51.290 }, 00:15:51.290 "driver_specific": { 00:15:51.290 "raid": { 00:15:51.290 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:51.290 "strip_size_kb": 64, 00:15:51.290 "state": "online", 00:15:51.290 "raid_level": "raid5f", 00:15:51.290 "superblock": true, 00:15:51.290 "num_base_bdevs": 4, 00:15:51.290 "num_base_bdevs_discovered": 4, 00:15:51.290 "num_base_bdevs_operational": 4, 00:15:51.290 "base_bdevs_list": [ 00:15:51.290 { 00:15:51.290 "name": "pt1", 00:15:51.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.290 "is_configured": true, 00:15:51.290 "data_offset": 2048, 
00:15:51.290 "data_size": 63488 00:15:51.290 }, 00:15:51.290 { 00:15:51.290 "name": "pt2", 00:15:51.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.290 "is_configured": true, 00:15:51.290 "data_offset": 2048, 00:15:51.290 "data_size": 63488 00:15:51.290 }, 00:15:51.290 { 00:15:51.290 "name": "pt3", 00:15:51.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.290 "is_configured": true, 00:15:51.290 "data_offset": 2048, 00:15:51.290 "data_size": 63488 00:15:51.290 }, 00:15:51.290 { 00:15:51.290 "name": "pt4", 00:15:51.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.290 "is_configured": true, 00:15:51.290 "data_offset": 2048, 00:15:51.290 "data_size": 63488 00:15:51.290 } 00:15:51.290 ] 00:15:51.290 } 00:15:51.290 } 00:15:51.290 }' 00:15:51.290 10:44:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:51.290 pt2 00:15:51.290 pt3 00:15:51.290 pt4' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.290 10:44:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.290 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.550 [2024-11-18 10:44:17.239399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=904ce3e9-24bb-49d6-bd0f-c251f0771c64 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
904ce3e9-24bb-49d6-bd0f-c251f0771c64 ']' 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.550 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.550 [2024-11-18 10:44:17.287152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.550 [2024-11-18 10:44:17.287241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.551 [2024-11-18 10:44:17.287320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.551 [2024-11-18 10:44:17.287421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.551 [2024-11-18 10:44:17.287477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.551 
10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.551 10:44:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.551 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.811 [2024-11-18 10:44:17.450880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:51.811 [2024-11-18 10:44:17.452611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:51.811 [2024-11-18 10:44:17.452689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:51.811 [2024-11-18 10:44:17.452743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:51.811 [2024-11-18 10:44:17.452818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:51.811 [2024-11-18 10:44:17.452900] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:51.811 [2024-11-18 10:44:17.452970] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:51.811 [2024-11-18 10:44:17.453021] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:51.811 [2024-11-18 10:44:17.453067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.811 [2024-11-18 10:44:17.453100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:51.811 request: 00:15:51.811 { 00:15:51.811 "name": "raid_bdev1", 00:15:51.811 "raid_level": "raid5f", 00:15:51.811 "base_bdevs": [ 00:15:51.811 "malloc1", 00:15:51.811 "malloc2", 00:15:51.811 "malloc3", 00:15:51.811 "malloc4" 00:15:51.811 ], 00:15:51.811 "strip_size_kb": 64, 00:15:51.811 "superblock": false, 00:15:51.811 "method": "bdev_raid_create", 00:15:51.811 "req_id": 1 00:15:51.811 } 00:15:51.811 Got JSON-RPC error response 
00:15:51.811 response: 00:15:51.811 { 00:15:51.811 "code": -17, 00:15:51.811 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:51.811 } 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:51.811 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.812 [2024-11-18 10:44:17.510760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.812 [2024-11-18 10:44:17.510808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:51.812 [2024-11-18 10:44:17.510823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:51.812 [2024-11-18 10:44:17.510833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.812 [2024-11-18 10:44:17.512879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.812 [2024-11-18 10:44:17.512919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.812 [2024-11-18 10:44:17.512978] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:51.812 [2024-11-18 10:44:17.513036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.812 pt1 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.812 "name": "raid_bdev1", 00:15:51.812 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:51.812 "strip_size_kb": 64, 00:15:51.812 "state": "configuring", 00:15:51.812 "raid_level": "raid5f", 00:15:51.812 "superblock": true, 00:15:51.812 "num_base_bdevs": 4, 00:15:51.812 "num_base_bdevs_discovered": 1, 00:15:51.812 "num_base_bdevs_operational": 4, 00:15:51.812 "base_bdevs_list": [ 00:15:51.812 { 00:15:51.812 "name": "pt1", 00:15:51.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.812 "is_configured": true, 00:15:51.812 "data_offset": 2048, 00:15:51.812 "data_size": 63488 00:15:51.812 }, 00:15:51.812 { 00:15:51.812 "name": null, 00:15:51.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.812 "is_configured": false, 00:15:51.812 "data_offset": 2048, 00:15:51.812 "data_size": 63488 00:15:51.812 }, 00:15:51.812 { 00:15:51.812 "name": null, 00:15:51.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.812 "is_configured": false, 00:15:51.812 "data_offset": 2048, 00:15:51.812 "data_size": 63488 00:15:51.812 }, 00:15:51.812 { 00:15:51.812 "name": null, 00:15:51.812 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.812 "is_configured": false, 00:15:51.812 "data_offset": 2048, 00:15:51.812 "data_size": 63488 00:15:51.812 } 00:15:51.812 ] 00:15:51.812 }' 
00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.812 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.381 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:52.381 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.382 [2024-11-18 10:44:17.965975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:52.382 [2024-11-18 10:44:17.966071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.382 [2024-11-18 10:44:17.966100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:52.382 [2024-11-18 10:44:17.966128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.382 [2024-11-18 10:44:17.966464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.382 [2024-11-18 10:44:17.966521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:52.382 [2024-11-18 10:44:17.966600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:52.382 [2024-11-18 10:44:17.966647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.382 pt2 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.382 [2024-11-18 10:44:17.977969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.382 10:44:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.382 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:52.382 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.382 "name": "raid_bdev1", 00:15:52.382 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:52.382 "strip_size_kb": 64, 00:15:52.382 "state": "configuring", 00:15:52.382 "raid_level": "raid5f", 00:15:52.382 "superblock": true, 00:15:52.382 "num_base_bdevs": 4, 00:15:52.382 "num_base_bdevs_discovered": 1, 00:15:52.382 "num_base_bdevs_operational": 4, 00:15:52.382 "base_bdevs_list": [ 00:15:52.382 { 00:15:52.382 "name": "pt1", 00:15:52.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.382 "is_configured": true, 00:15:52.382 "data_offset": 2048, 00:15:52.382 "data_size": 63488 00:15:52.382 }, 00:15:52.382 { 00:15:52.382 "name": null, 00:15:52.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.382 "is_configured": false, 00:15:52.382 "data_offset": 0, 00:15:52.382 "data_size": 63488 00:15:52.382 }, 00:15:52.382 { 00:15:52.382 "name": null, 00:15:52.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.382 "is_configured": false, 00:15:52.382 "data_offset": 2048, 00:15:52.382 "data_size": 63488 00:15:52.382 }, 00:15:52.382 { 00:15:52.382 "name": null, 00:15:52.382 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:52.382 "is_configured": false, 00:15:52.382 "data_offset": 2048, 00:15:52.382 "data_size": 63488 00:15:52.382 } 00:15:52.382 ] 00:15:52.382 }' 00:15:52.382 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.382 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.644 [2024-11-18 10:44:18.457153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:52.644 [2024-11-18 10:44:18.457223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.644 [2024-11-18 10:44:18.457240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:52.644 [2024-11-18 10:44:18.457248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.644 [2024-11-18 10:44:18.457610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.644 [2024-11-18 10:44:18.457626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:52.644 [2024-11-18 10:44:18.457694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:52.644 [2024-11-18 10:44:18.457712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.644 pt2 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.644 [2024-11-18 10:44:18.469117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:52.644 [2024-11-18 10:44:18.469162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.644 [2024-11-18 10:44:18.469186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:52.644 [2024-11-18 10:44:18.469194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.644 [2024-11-18 10:44:18.469514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.644 [2024-11-18 10:44:18.469531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:52.644 [2024-11-18 10:44:18.469583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:52.644 [2024-11-18 10:44:18.469598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:52.644 pt3 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.644 [2024-11-18 10:44:18.481079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:52.644 [2024-11-18 10:44:18.481125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.644 [2024-11-18 10:44:18.481141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:52.644 [2024-11-18 10:44:18.481148] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.644 [2024-11-18 10:44:18.481501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.644 [2024-11-18 10:44:18.481525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:52.644 [2024-11-18 10:44:18.481577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:52.644 [2024-11-18 10:44:18.481592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:52.644 [2024-11-18 10:44:18.481710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:52.644 [2024-11-18 10:44:18.481718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:52.644 [2024-11-18 10:44:18.481923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:52.644 [2024-11-18 10:44:18.488681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:52.644 [2024-11-18 10:44:18.488705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:52.644 [2024-11-18 10:44:18.488863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.644 pt4 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.644 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.913 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.913 "name": "raid_bdev1", 00:15:52.913 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:52.913 "strip_size_kb": 64, 00:15:52.913 "state": "online", 00:15:52.913 "raid_level": "raid5f", 00:15:52.913 "superblock": true, 00:15:52.913 "num_base_bdevs": 4, 00:15:52.913 "num_base_bdevs_discovered": 4, 00:15:52.913 "num_base_bdevs_operational": 4, 00:15:52.913 "base_bdevs_list": [ 00:15:52.913 { 00:15:52.913 "name": "pt1", 00:15:52.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.913 "is_configured": true, 00:15:52.913 
"data_offset": 2048, 00:15:52.913 "data_size": 63488 00:15:52.913 }, 00:15:52.913 { 00:15:52.913 "name": "pt2", 00:15:52.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.913 "is_configured": true, 00:15:52.913 "data_offset": 2048, 00:15:52.913 "data_size": 63488 00:15:52.913 }, 00:15:52.913 { 00:15:52.913 "name": "pt3", 00:15:52.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.913 "is_configured": true, 00:15:52.913 "data_offset": 2048, 00:15:52.913 "data_size": 63488 00:15:52.913 }, 00:15:52.913 { 00:15:52.913 "name": "pt4", 00:15:52.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:52.913 "is_configured": true, 00:15:52.913 "data_offset": 2048, 00:15:52.913 "data_size": 63488 00:15:52.913 } 00:15:52.913 ] 00:15:52.913 }' 00:15:52.913 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.913 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.189 10:44:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.189 [2024-11-18 10:44:18.940040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.189 "name": "raid_bdev1", 00:15:53.189 "aliases": [ 00:15:53.189 "904ce3e9-24bb-49d6-bd0f-c251f0771c64" 00:15:53.189 ], 00:15:53.189 "product_name": "Raid Volume", 00:15:53.189 "block_size": 512, 00:15:53.189 "num_blocks": 190464, 00:15:53.189 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:53.189 "assigned_rate_limits": { 00:15:53.189 "rw_ios_per_sec": 0, 00:15:53.189 "rw_mbytes_per_sec": 0, 00:15:53.189 "r_mbytes_per_sec": 0, 00:15:53.189 "w_mbytes_per_sec": 0 00:15:53.189 }, 00:15:53.189 "claimed": false, 00:15:53.189 "zoned": false, 00:15:53.189 "supported_io_types": { 00:15:53.189 "read": true, 00:15:53.189 "write": true, 00:15:53.189 "unmap": false, 00:15:53.189 "flush": false, 00:15:53.189 "reset": true, 00:15:53.189 "nvme_admin": false, 00:15:53.189 "nvme_io": false, 00:15:53.189 "nvme_io_md": false, 00:15:53.189 "write_zeroes": true, 00:15:53.189 "zcopy": false, 00:15:53.189 "get_zone_info": false, 00:15:53.189 "zone_management": false, 00:15:53.189 "zone_append": false, 00:15:53.189 "compare": false, 00:15:53.189 "compare_and_write": false, 00:15:53.189 "abort": false, 00:15:53.189 "seek_hole": false, 00:15:53.189 "seek_data": false, 00:15:53.189 "copy": false, 00:15:53.189 "nvme_iov_md": false 00:15:53.189 }, 00:15:53.189 "driver_specific": { 00:15:53.189 "raid": { 00:15:53.189 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:53.189 "strip_size_kb": 64, 00:15:53.189 "state": "online", 00:15:53.189 "raid_level": "raid5f", 00:15:53.189 "superblock": true, 00:15:53.189 "num_base_bdevs": 4, 00:15:53.189 "num_base_bdevs_discovered": 4, 
00:15:53.189 "num_base_bdevs_operational": 4, 00:15:53.189 "base_bdevs_list": [ 00:15:53.189 { 00:15:53.189 "name": "pt1", 00:15:53.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.189 "is_configured": true, 00:15:53.189 "data_offset": 2048, 00:15:53.189 "data_size": 63488 00:15:53.189 }, 00:15:53.189 { 00:15:53.189 "name": "pt2", 00:15:53.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.189 "is_configured": true, 00:15:53.189 "data_offset": 2048, 00:15:53.189 "data_size": 63488 00:15:53.189 }, 00:15:53.189 { 00:15:53.189 "name": "pt3", 00:15:53.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.189 "is_configured": true, 00:15:53.189 "data_offset": 2048, 00:15:53.189 "data_size": 63488 00:15:53.189 }, 00:15:53.189 { 00:15:53.189 "name": "pt4", 00:15:53.189 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:53.189 "is_configured": true, 00:15:53.189 "data_offset": 2048, 00:15:53.189 "data_size": 63488 00:15:53.189 } 00:15:53.189 ] 00:15:53.189 } 00:15:53.189 } 00:15:53.189 }' 00:15:53.189 10:44:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.189 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:53.189 pt2 00:15:53.189 pt3 00:15:53.189 pt4' 00:15:53.189 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.189 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.189 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.189 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:53.189 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:15:53.189 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.189 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.449 10:44:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.449 [2024-11-18 10:44:19.267438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.449 
10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 904ce3e9-24bb-49d6-bd0f-c251f0771c64 '!=' 904ce3e9-24bb-49d6-bd0f-c251f0771c64 ']' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.449 [2024-11-18 10:44:19.311266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.449 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.708 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.708 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.708 "name": "raid_bdev1", 00:15:53.708 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:53.708 "strip_size_kb": 64, 00:15:53.708 "state": "online", 00:15:53.708 "raid_level": "raid5f", 00:15:53.708 "superblock": true, 00:15:53.708 "num_base_bdevs": 4, 00:15:53.708 "num_base_bdevs_discovered": 3, 00:15:53.708 "num_base_bdevs_operational": 3, 00:15:53.708 "base_bdevs_list": [ 00:15:53.708 { 00:15:53.708 "name": null, 00:15:53.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.708 "is_configured": false, 00:15:53.708 "data_offset": 0, 00:15:53.708 "data_size": 63488 00:15:53.708 }, 00:15:53.708 { 00:15:53.708 "name": "pt2", 00:15:53.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.708 "is_configured": true, 00:15:53.708 "data_offset": 2048, 00:15:53.708 "data_size": 63488 00:15:53.708 }, 00:15:53.708 { 00:15:53.708 "name": "pt3", 00:15:53.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.708 "is_configured": true, 00:15:53.708 "data_offset": 2048, 00:15:53.708 "data_size": 63488 00:15:53.708 }, 00:15:53.708 { 00:15:53.708 "name": "pt4", 00:15:53.708 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:53.708 "is_configured": true, 00:15:53.708 
"data_offset": 2048, 00:15:53.708 "data_size": 63488 00:15:53.708 } 00:15:53.708 ] 00:15:53.708 }' 00:15:53.708 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.708 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.968 [2024-11-18 10:44:19.762424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.968 [2024-11-18 10:44:19.762501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.968 [2024-11-18 10:44:19.762575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.968 [2024-11-18 10:44:19.762654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.968 [2024-11-18 10:44:19.762718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.968 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.229 10:44:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.229 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:54.229 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:54.229 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:54.229 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:54.229 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.229 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.229 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.229 [2024-11-18 10:44:19.862285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.229 [2024-11-18 10:44:19.862329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.230 [2024-11-18 10:44:19.862345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:54.230 [2024-11-18 10:44:19.862353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.230 [2024-11-18 10:44:19.864517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.230 [2024-11-18 10:44:19.864586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.230 [2024-11-18 10:44:19.864674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:54.230 [2024-11-18 10:44:19.864729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.230 pt2 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.230 "name": "raid_bdev1", 00:15:54.230 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:54.230 "strip_size_kb": 64, 00:15:54.230 "state": "configuring", 00:15:54.230 "raid_level": "raid5f", 00:15:54.230 "superblock": true, 00:15:54.230 
"num_base_bdevs": 4, 00:15:54.230 "num_base_bdevs_discovered": 1, 00:15:54.230 "num_base_bdevs_operational": 3, 00:15:54.230 "base_bdevs_list": [ 00:15:54.230 { 00:15:54.230 "name": null, 00:15:54.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.230 "is_configured": false, 00:15:54.230 "data_offset": 2048, 00:15:54.230 "data_size": 63488 00:15:54.230 }, 00:15:54.230 { 00:15:54.230 "name": "pt2", 00:15:54.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.230 "is_configured": true, 00:15:54.230 "data_offset": 2048, 00:15:54.230 "data_size": 63488 00:15:54.230 }, 00:15:54.230 { 00:15:54.230 "name": null, 00:15:54.230 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.230 "is_configured": false, 00:15:54.230 "data_offset": 2048, 00:15:54.230 "data_size": 63488 00:15:54.230 }, 00:15:54.230 { 00:15:54.230 "name": null, 00:15:54.230 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:54.230 "is_configured": false, 00:15:54.230 "data_offset": 2048, 00:15:54.230 "data_size": 63488 00:15:54.230 } 00:15:54.230 ] 00:15:54.230 }' 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.230 10:44:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.492 [2024-11-18 10:44:20.289565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.492 [2024-11-18 
10:44:20.289614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.492 [2024-11-18 10:44:20.289631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:54.492 [2024-11-18 10:44:20.289640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.492 [2024-11-18 10:44:20.289978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.492 [2024-11-18 10:44:20.289993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.492 [2024-11-18 10:44:20.290054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:54.492 [2024-11-18 10:44:20.290077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.492 pt3 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.492 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.492 "name": "raid_bdev1", 00:15:54.492 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:54.492 "strip_size_kb": 64, 00:15:54.492 "state": "configuring", 00:15:54.492 "raid_level": "raid5f", 00:15:54.492 "superblock": true, 00:15:54.492 "num_base_bdevs": 4, 00:15:54.492 "num_base_bdevs_discovered": 2, 00:15:54.492 "num_base_bdevs_operational": 3, 00:15:54.492 "base_bdevs_list": [ 00:15:54.492 { 00:15:54.492 "name": null, 00:15:54.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.492 "is_configured": false, 00:15:54.492 "data_offset": 2048, 00:15:54.492 "data_size": 63488 00:15:54.492 }, 00:15:54.492 { 00:15:54.492 "name": "pt2", 00:15:54.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.492 "is_configured": true, 00:15:54.492 "data_offset": 2048, 00:15:54.492 "data_size": 63488 00:15:54.492 }, 00:15:54.492 { 00:15:54.492 "name": "pt3", 00:15:54.493 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.493 "is_configured": true, 00:15:54.493 "data_offset": 2048, 00:15:54.493 "data_size": 63488 00:15:54.493 }, 00:15:54.493 { 00:15:54.493 "name": null, 00:15:54.493 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:54.493 "is_configured": false, 00:15:54.493 "data_offset": 2048, 
00:15:54.493 "data_size": 63488 00:15:54.493 } 00:15:54.493 ] 00:15:54.493 }' 00:15:54.493 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.493 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.065 [2024-11-18 10:44:20.736818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:55.065 [2024-11-18 10:44:20.736923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.065 [2024-11-18 10:44:20.736960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:55.065 [2024-11-18 10:44:20.736988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.065 [2024-11-18 10:44:20.737411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.065 [2024-11-18 10:44:20.737464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:55.065 [2024-11-18 10:44:20.737565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:55.065 [2024-11-18 10:44:20.737611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:55.065 [2024-11-18 10:44:20.737762] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:55.065 [2024-11-18 10:44:20.737796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:55.065 [2024-11-18 10:44:20.738039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:55.065 [2024-11-18 10:44:20.744970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:55.065 [2024-11-18 10:44:20.745029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:55.065 [2024-11-18 10:44:20.745355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.065 pt4 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.065 
10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.065 10:44:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.065 "name": "raid_bdev1", 00:15:55.065 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:55.065 "strip_size_kb": 64, 00:15:55.065 "state": "online", 00:15:55.065 "raid_level": "raid5f", 00:15:55.065 "superblock": true, 00:15:55.065 "num_base_bdevs": 4, 00:15:55.065 "num_base_bdevs_discovered": 3, 00:15:55.065 "num_base_bdevs_operational": 3, 00:15:55.065 "base_bdevs_list": [ 00:15:55.065 { 00:15:55.065 "name": null, 00:15:55.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.065 "is_configured": false, 00:15:55.065 "data_offset": 2048, 00:15:55.065 "data_size": 63488 00:15:55.065 }, 00:15:55.065 { 00:15:55.065 "name": "pt2", 00:15:55.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.065 "is_configured": true, 00:15:55.065 "data_offset": 2048, 00:15:55.065 "data_size": 63488 00:15:55.065 }, 00:15:55.065 { 00:15:55.065 "name": "pt3", 00:15:55.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.065 "is_configured": true, 00:15:55.065 "data_offset": 2048, 00:15:55.065 "data_size": 63488 00:15:55.065 }, 00:15:55.065 { 00:15:55.065 "name": "pt4", 00:15:55.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:55.065 "is_configured": true, 00:15:55.065 "data_offset": 2048, 00:15:55.065 "data_size": 63488 00:15:55.065 } 00:15:55.065 ] 00:15:55.065 }' 00:15:55.066 10:44:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.066 10:44:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.326 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.326 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.326 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.326 [2024-11-18 10:44:21.204865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.326 [2024-11-18 10:44:21.204888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.326 [2024-11-18 10:44:21.204943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.326 [2024-11-18 10:44:21.205001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.327 [2024-11-18 10:44:21.205012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.587 [2024-11-18 10:44:21.280742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.587 [2024-11-18 10:44:21.280813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.587 [2024-11-18 10:44:21.280835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:55.587 [2024-11-18 10:44:21.280846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.587 [2024-11-18 10:44:21.282986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.587 [2024-11-18 10:44:21.283026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.587 [2024-11-18 10:44:21.283087] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:55.587 [2024-11-18 10:44:21.283141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.587 
[2024-11-18 10:44:21.283303] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:55.587 [2024-11-18 10:44:21.283316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.587 [2024-11-18 10:44:21.283330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:55.587 [2024-11-18 10:44:21.283407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.587 [2024-11-18 10:44:21.283508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.587 pt1 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.587 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.587 "name": "raid_bdev1", 00:15:55.587 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:55.587 "strip_size_kb": 64, 00:15:55.587 "state": "configuring", 00:15:55.587 "raid_level": "raid5f", 00:15:55.587 "superblock": true, 00:15:55.587 "num_base_bdevs": 4, 00:15:55.587 "num_base_bdevs_discovered": 2, 00:15:55.587 "num_base_bdevs_operational": 3, 00:15:55.587 "base_bdevs_list": [ 00:15:55.587 { 00:15:55.587 "name": null, 00:15:55.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.587 "is_configured": false, 00:15:55.587 "data_offset": 2048, 00:15:55.587 "data_size": 63488 00:15:55.587 }, 00:15:55.587 { 00:15:55.588 "name": "pt2", 00:15:55.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.588 "is_configured": true, 00:15:55.588 "data_offset": 2048, 00:15:55.588 "data_size": 63488 00:15:55.588 }, 00:15:55.588 { 00:15:55.588 "name": "pt3", 00:15:55.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.588 "is_configured": true, 00:15:55.588 "data_offset": 2048, 00:15:55.588 "data_size": 63488 00:15:55.588 }, 00:15:55.588 { 00:15:55.588 "name": null, 00:15:55.588 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:55.588 "is_configured": false, 00:15:55.588 "data_offset": 2048, 00:15:55.588 "data_size": 63488 00:15:55.588 } 00:15:55.588 ] 
00:15:55.588 }' 00:15:55.588 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.588 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.848 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.108 [2024-11-18 10:44:21.735990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:56.108 [2024-11-18 10:44:21.736089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.108 [2024-11-18 10:44:21.736127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:56.108 [2024-11-18 10:44:21.736155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.108 [2024-11-18 10:44:21.736550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.108 [2024-11-18 10:44:21.736606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:56.108 [2024-11-18 10:44:21.736698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:56.108 [2024-11-18 10:44:21.736752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:56.108 [2024-11-18 10:44:21.736932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:56.108 [2024-11-18 10:44:21.736969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:56.108 [2024-11-18 10:44:21.737238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:56.108 [2024-11-18 10:44:21.744123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:56.108 [2024-11-18 10:44:21.744208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:56.108 [2024-11-18 10:44:21.744483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.108 pt4 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.108 10:44:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.108 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.109 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.109 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.109 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.109 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.109 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.109 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.109 "name": "raid_bdev1", 00:15:56.109 "uuid": "904ce3e9-24bb-49d6-bd0f-c251f0771c64", 00:15:56.109 "strip_size_kb": 64, 00:15:56.109 "state": "online", 00:15:56.109 "raid_level": "raid5f", 00:15:56.109 "superblock": true, 00:15:56.109 "num_base_bdevs": 4, 00:15:56.109 "num_base_bdevs_discovered": 3, 00:15:56.109 "num_base_bdevs_operational": 3, 00:15:56.109 "base_bdevs_list": [ 00:15:56.109 { 00:15:56.109 "name": null, 00:15:56.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.109 "is_configured": false, 00:15:56.109 "data_offset": 2048, 00:15:56.109 "data_size": 63488 00:15:56.109 }, 00:15:56.109 { 00:15:56.109 "name": "pt2", 00:15:56.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.109 "is_configured": true, 00:15:56.109 "data_offset": 2048, 00:15:56.109 "data_size": 63488 00:15:56.109 }, 00:15:56.109 { 00:15:56.109 "name": "pt3", 00:15:56.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.109 "is_configured": true, 00:15:56.109 "data_offset": 2048, 00:15:56.109 "data_size": 63488 
00:15:56.109 }, 00:15:56.109 { 00:15:56.109 "name": "pt4", 00:15:56.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:56.109 "is_configured": true, 00:15:56.109 "data_offset": 2048, 00:15:56.109 "data_size": 63488 00:15:56.109 } 00:15:56.109 ] 00:15:56.109 }' 00:15:56.109 10:44:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.109 10:44:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.369 10:44:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:56.369 10:44:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:56.369 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.369 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.369 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:56.629 [2024-11-18 10:44:22.283950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 904ce3e9-24bb-49d6-bd0f-c251f0771c64 '!=' 904ce3e9-24bb-49d6-bd0f-c251f0771c64 ']' 00:15:56.629 10:44:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83909 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83909 ']' 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83909 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83909 00:15:56.629 killing process with pid 83909 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83909' 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83909 00:15:56.629 [2024-11-18 10:44:22.368726] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.629 [2024-11-18 10:44:22.368798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.629 [2024-11-18 10:44:22.368864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.629 [2024-11-18 10:44:22.368876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:56.629 10:44:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83909 00:15:56.890 [2024-11-18 10:44:22.739594] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.276 10:44:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:58.276 
00:15:58.276 real 0m8.534s 00:15:58.276 user 0m13.405s 00:15:58.276 sys 0m1.665s 00:15:58.276 10:44:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.276 10:44:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.276 ************************************ 00:15:58.276 END TEST raid5f_superblock_test 00:15:58.276 ************************************ 00:15:58.276 10:44:23 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:58.276 10:44:23 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:58.276 10:44:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:58.276 10:44:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.276 10:44:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.276 ************************************ 00:15:58.276 START TEST raid5f_rebuild_test 00:15:58.276 ************************************ 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:58.276 10:44:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84395 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84395 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84395 ']' 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.276 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.276 [2024-11-18 10:44:23.959626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:15:58.276 [2024-11-18 10:44:23.959813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.276 Zero copy mechanism will not be used. 
00:15:58.276 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84395 ] 00:15:58.276 [2024-11-18 10:44:24.131194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.537 [2024-11-18 10:44:24.237815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.797 [2024-11-18 10:44:24.426098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.797 [2024-11-18 10:44:24.426204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.058 BaseBdev1_malloc 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.058 [2024-11-18 10:44:24.826534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:59.058 [2024-11-18 10:44:24.826646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:59.058 [2024-11-18 10:44:24.826690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.058 [2024-11-18 10:44:24.826722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.058 [2024-11-18 10:44:24.828726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.058 [2024-11-18 10:44:24.828798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.058 BaseBdev1 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.058 BaseBdev2_malloc 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.058 [2024-11-18 10:44:24.880971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:59.058 [2024-11-18 10:44:24.881064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.058 [2024-11-18 10:44:24.881087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.058 [2024-11-18 10:44:24.881099] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.058 [2024-11-18 10:44:24.883034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.058 [2024-11-18 10:44:24.883073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.058 BaseBdev2 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.058 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 BaseBdev3_malloc 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 [2024-11-18 10:44:24.967860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:59.320 [2024-11-18 10:44:24.967912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.320 [2024-11-18 10:44:24.967934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:59.320 [2024-11-18 10:44:24.967945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.320 [2024-11-18 10:44:24.969905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.320 [2024-11-18 
10:44:24.969947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:59.320 BaseBdev3 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.320 10:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 BaseBdev4_malloc 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 [2024-11-18 10:44:25.023661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:59.320 [2024-11-18 10:44:25.023713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.320 [2024-11-18 10:44:25.023732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:59.320 [2024-11-18 10:44:25.023743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.320 [2024-11-18 10:44:25.025720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.320 [2024-11-18 10:44:25.025760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:59.320 BaseBdev4 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 spare_malloc 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.320 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 spare_delay 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.321 [2024-11-18 10:44:25.091416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.321 [2024-11-18 10:44:25.091516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.321 [2024-11-18 10:44:25.091557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:59.321 [2024-11-18 10:44:25.091568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.321 [2024-11-18 10:44:25.093544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.321 [2024-11-18 10:44:25.093584] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.321 spare 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.321 [2024-11-18 10:44:25.103444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.321 [2024-11-18 10:44:25.105144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.321 [2024-11-18 10:44:25.105216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.321 [2024-11-18 10:44:25.105264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.321 [2024-11-18 10:44:25.105345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:59.321 [2024-11-18 10:44:25.105356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:59.321 [2024-11-18 10:44:25.105578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:59.321 [2024-11-18 10:44:25.112567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:59.321 [2024-11-18 10:44:25.112619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:59.321 [2024-11-18 10:44:25.112825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.321 10:44:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.321 "name": "raid_bdev1", 00:15:59.321 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:15:59.321 "strip_size_kb": 64, 00:15:59.321 "state": "online", 00:15:59.321 "raid_level": "raid5f", 00:15:59.321 "superblock": false, 00:15:59.321 "num_base_bdevs": 4, 00:15:59.321 
"num_base_bdevs_discovered": 4, 00:15:59.321 "num_base_bdevs_operational": 4, 00:15:59.321 "base_bdevs_list": [ 00:15:59.321 { 00:15:59.321 "name": "BaseBdev1", 00:15:59.321 "uuid": "e2b895ef-4ef1-5db5-b88e-3f9fa84968c5", 00:15:59.321 "is_configured": true, 00:15:59.321 "data_offset": 0, 00:15:59.321 "data_size": 65536 00:15:59.321 }, 00:15:59.321 { 00:15:59.321 "name": "BaseBdev2", 00:15:59.321 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:15:59.321 "is_configured": true, 00:15:59.321 "data_offset": 0, 00:15:59.321 "data_size": 65536 00:15:59.321 }, 00:15:59.321 { 00:15:59.321 "name": "BaseBdev3", 00:15:59.321 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:15:59.321 "is_configured": true, 00:15:59.321 "data_offset": 0, 00:15:59.321 "data_size": 65536 00:15:59.321 }, 00:15:59.321 { 00:15:59.321 "name": "BaseBdev4", 00:15:59.321 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:15:59.321 "is_configured": true, 00:15:59.321 "data_offset": 0, 00:15:59.321 "data_size": 65536 00:15:59.321 } 00:15:59.321 ] 00:15:59.321 }' 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.321 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.893 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.893 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.893 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.894 [2024-11-18 10:44:25.595971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:59.894 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:00.155 [2024-11-18 10:44:25.847382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:00.155 /dev/nbd0 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.155 1+0 records in 00:16:00.155 1+0 records out 00:16:00.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357279 s, 11.5 MB/s 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:00.155 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:00.728 512+0 records in 00:16:00.728 512+0 records out 00:16:00.728 100663296 bytes (101 MB, 96 MiB) copied, 0.585033 s, 172 MB/s 00:16:00.728 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:00.728 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.728 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:00.728 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.728 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:00.728 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.728 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:00.990 [2024-11-18 10:44:26.733311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.990 [2024-11-18 10:44:26.765885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.990 "name": "raid_bdev1", 00:16:00.990 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:00.990 "strip_size_kb": 64, 00:16:00.990 "state": "online", 00:16:00.990 "raid_level": "raid5f", 00:16:00.990 "superblock": false, 00:16:00.990 "num_base_bdevs": 4, 00:16:00.990 "num_base_bdevs_discovered": 3, 00:16:00.990 "num_base_bdevs_operational": 3, 00:16:00.990 "base_bdevs_list": [ 00:16:00.990 { 00:16:00.990 "name": null, 00:16:00.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.990 "is_configured": false, 00:16:00.990 "data_offset": 0, 00:16:00.990 "data_size": 65536 00:16:00.990 }, 00:16:00.990 { 00:16:00.990 "name": "BaseBdev2", 00:16:00.990 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:00.990 "is_configured": true, 00:16:00.990 "data_offset": 0, 00:16:00.990 "data_size": 65536 00:16:00.990 }, 00:16:00.990 { 00:16:00.990 "name": "BaseBdev3", 00:16:00.990 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:00.990 "is_configured": true, 00:16:00.990 "data_offset": 0, 
00:16:00.990 "data_size": 65536 00:16:00.990 }, 00:16:00.990 { 00:16:00.990 "name": "BaseBdev4", 00:16:00.990 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:00.990 "is_configured": true, 00:16:00.990 "data_offset": 0, 00:16:00.990 "data_size": 65536 00:16:00.990 } 00:16:00.990 ] 00:16:00.990 }' 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.990 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.562 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.562 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.562 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.562 [2024-11-18 10:44:27.221091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.562 [2024-11-18 10:44:27.236157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:01.562 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.562 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:01.562 [2024-11-18 10:44:27.244881] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.503 10:44:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.503 "name": "raid_bdev1", 00:16:02.503 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:02.503 "strip_size_kb": 64, 00:16:02.503 "state": "online", 00:16:02.503 "raid_level": "raid5f", 00:16:02.503 "superblock": false, 00:16:02.503 "num_base_bdevs": 4, 00:16:02.503 "num_base_bdevs_discovered": 4, 00:16:02.503 "num_base_bdevs_operational": 4, 00:16:02.503 "process": { 00:16:02.503 "type": "rebuild", 00:16:02.503 "target": "spare", 00:16:02.503 "progress": { 00:16:02.503 "blocks": 19200, 00:16:02.503 "percent": 9 00:16:02.503 } 00:16:02.503 }, 00:16:02.503 "base_bdevs_list": [ 00:16:02.503 { 00:16:02.503 "name": "spare", 00:16:02.503 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:02.503 "is_configured": true, 00:16:02.503 "data_offset": 0, 00:16:02.503 "data_size": 65536 00:16:02.503 }, 00:16:02.503 { 00:16:02.503 "name": "BaseBdev2", 00:16:02.503 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:02.503 "is_configured": true, 00:16:02.503 "data_offset": 0, 00:16:02.503 "data_size": 65536 00:16:02.503 }, 00:16:02.503 { 00:16:02.503 "name": "BaseBdev3", 00:16:02.503 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:02.503 "is_configured": true, 00:16:02.503 "data_offset": 0, 00:16:02.503 "data_size": 65536 00:16:02.503 }, 00:16:02.503 { 00:16:02.503 "name": "BaseBdev4", 00:16:02.503 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 
00:16:02.503 "is_configured": true, 00:16:02.503 "data_offset": 0, 00:16:02.503 "data_size": 65536 00:16:02.503 } 00:16:02.503 ] 00:16:02.503 }' 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.503 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.764 [2024-11-18 10:44:28.399534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.764 [2024-11-18 10:44:28.450358] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.764 [2024-11-18 10:44:28.450421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.764 [2024-11-18 10:44:28.450438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.764 [2024-11-18 10:44:28.450447] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.764 "name": "raid_bdev1", 00:16:02.764 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:02.764 "strip_size_kb": 64, 00:16:02.764 "state": "online", 00:16:02.764 "raid_level": "raid5f", 00:16:02.764 "superblock": false, 00:16:02.764 "num_base_bdevs": 4, 00:16:02.764 "num_base_bdevs_discovered": 3, 00:16:02.764 "num_base_bdevs_operational": 3, 00:16:02.764 "base_bdevs_list": [ 00:16:02.764 { 00:16:02.764 "name": null, 00:16:02.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.764 "is_configured": false, 00:16:02.764 "data_offset": 0, 00:16:02.764 "data_size": 65536 
00:16:02.764 }, 00:16:02.764 { 00:16:02.764 "name": "BaseBdev2", 00:16:02.764 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:02.764 "is_configured": true, 00:16:02.764 "data_offset": 0, 00:16:02.764 "data_size": 65536 00:16:02.764 }, 00:16:02.764 { 00:16:02.764 "name": "BaseBdev3", 00:16:02.764 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:02.764 "is_configured": true, 00:16:02.764 "data_offset": 0, 00:16:02.764 "data_size": 65536 00:16:02.764 }, 00:16:02.764 { 00:16:02.764 "name": "BaseBdev4", 00:16:02.764 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:02.764 "is_configured": true, 00:16:02.764 "data_offset": 0, 00:16:02.764 "data_size": 65536 00:16:02.764 } 00:16:02.764 ] 00:16:02.764 }' 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.764 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:03.334 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.334 "name": "raid_bdev1", 00:16:03.334 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:03.334 "strip_size_kb": 64, 00:16:03.334 "state": "online", 00:16:03.334 "raid_level": "raid5f", 00:16:03.334 "superblock": false, 00:16:03.334 "num_base_bdevs": 4, 00:16:03.334 "num_base_bdevs_discovered": 3, 00:16:03.334 "num_base_bdevs_operational": 3, 00:16:03.334 "base_bdevs_list": [ 00:16:03.334 { 00:16:03.334 "name": null, 00:16:03.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.334 "is_configured": false, 00:16:03.334 "data_offset": 0, 00:16:03.334 "data_size": 65536 00:16:03.334 }, 00:16:03.334 { 00:16:03.334 "name": "BaseBdev2", 00:16:03.334 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:03.334 "is_configured": true, 00:16:03.334 "data_offset": 0, 00:16:03.334 "data_size": 65536 00:16:03.334 }, 00:16:03.334 { 00:16:03.334 "name": "BaseBdev3", 00:16:03.335 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:03.335 "is_configured": true, 00:16:03.335 "data_offset": 0, 00:16:03.335 "data_size": 65536 00:16:03.335 }, 00:16:03.335 { 00:16:03.335 "name": "BaseBdev4", 00:16:03.335 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:03.335 "is_configured": true, 00:16:03.335 "data_offset": 0, 00:16:03.335 "data_size": 65536 00:16:03.335 } 00:16:03.335 ] 00:16:03.335 }' 00:16:03.335 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.335 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.335 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.335 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.335 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:16:03.335 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.335 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.335 [2024-11-18 10:44:29.054814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.335 [2024-11-18 10:44:29.068478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:03.335 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.335 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:03.335 [2024-11-18 10:44:29.076995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.274 
"name": "raid_bdev1", 00:16:04.274 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:04.274 "strip_size_kb": 64, 00:16:04.274 "state": "online", 00:16:04.274 "raid_level": "raid5f", 00:16:04.274 "superblock": false, 00:16:04.274 "num_base_bdevs": 4, 00:16:04.274 "num_base_bdevs_discovered": 4, 00:16:04.274 "num_base_bdevs_operational": 4, 00:16:04.274 "process": { 00:16:04.274 "type": "rebuild", 00:16:04.274 "target": "spare", 00:16:04.274 "progress": { 00:16:04.274 "blocks": 19200, 00:16:04.274 "percent": 9 00:16:04.274 } 00:16:04.274 }, 00:16:04.274 "base_bdevs_list": [ 00:16:04.274 { 00:16:04.274 "name": "spare", 00:16:04.274 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:04.274 "is_configured": true, 00:16:04.274 "data_offset": 0, 00:16:04.274 "data_size": 65536 00:16:04.274 }, 00:16:04.274 { 00:16:04.274 "name": "BaseBdev2", 00:16:04.274 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:04.274 "is_configured": true, 00:16:04.274 "data_offset": 0, 00:16:04.274 "data_size": 65536 00:16:04.274 }, 00:16:04.274 { 00:16:04.274 "name": "BaseBdev3", 00:16:04.274 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:04.274 "is_configured": true, 00:16:04.274 "data_offset": 0, 00:16:04.274 "data_size": 65536 00:16:04.274 }, 00:16:04.274 { 00:16:04.274 "name": "BaseBdev4", 00:16:04.274 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:04.274 "is_configured": true, 00:16:04.274 "data_offset": 0, 00:16:04.274 "data_size": 65536 00:16:04.274 } 00:16:04.274 ] 00:16:04.274 }' 00:16:04.274 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.534 10:44:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=612 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.534 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.534 "name": "raid_bdev1", 00:16:04.535 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:04.535 "strip_size_kb": 64, 00:16:04.535 "state": "online", 00:16:04.535 "raid_level": "raid5f", 00:16:04.535 "superblock": false, 00:16:04.535 "num_base_bdevs": 4, 00:16:04.535 
"num_base_bdevs_discovered": 4, 00:16:04.535 "num_base_bdevs_operational": 4, 00:16:04.535 "process": { 00:16:04.535 "type": "rebuild", 00:16:04.535 "target": "spare", 00:16:04.535 "progress": { 00:16:04.535 "blocks": 21120, 00:16:04.535 "percent": 10 00:16:04.535 } 00:16:04.535 }, 00:16:04.535 "base_bdevs_list": [ 00:16:04.535 { 00:16:04.535 "name": "spare", 00:16:04.535 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:04.535 "is_configured": true, 00:16:04.535 "data_offset": 0, 00:16:04.535 "data_size": 65536 00:16:04.535 }, 00:16:04.535 { 00:16:04.535 "name": "BaseBdev2", 00:16:04.535 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:04.535 "is_configured": true, 00:16:04.535 "data_offset": 0, 00:16:04.535 "data_size": 65536 00:16:04.535 }, 00:16:04.535 { 00:16:04.535 "name": "BaseBdev3", 00:16:04.535 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:04.535 "is_configured": true, 00:16:04.535 "data_offset": 0, 00:16:04.535 "data_size": 65536 00:16:04.535 }, 00:16:04.535 { 00:16:04.535 "name": "BaseBdev4", 00:16:04.535 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:04.535 "is_configured": true, 00:16:04.535 "data_offset": 0, 00:16:04.535 "data_size": 65536 00:16:04.535 } 00:16:04.535 ] 00:16:04.535 }' 00:16:04.535 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.535 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.535 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.535 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.535 10:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.919 "name": "raid_bdev1", 00:16:05.919 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:05.919 "strip_size_kb": 64, 00:16:05.919 "state": "online", 00:16:05.919 "raid_level": "raid5f", 00:16:05.919 "superblock": false, 00:16:05.919 "num_base_bdevs": 4, 00:16:05.919 "num_base_bdevs_discovered": 4, 00:16:05.919 "num_base_bdevs_operational": 4, 00:16:05.919 "process": { 00:16:05.919 "type": "rebuild", 00:16:05.919 "target": "spare", 00:16:05.919 "progress": { 00:16:05.919 "blocks": 44160, 00:16:05.919 "percent": 22 00:16:05.919 } 00:16:05.919 }, 00:16:05.919 "base_bdevs_list": [ 00:16:05.919 { 00:16:05.919 "name": "spare", 00:16:05.919 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:05.919 "is_configured": true, 00:16:05.919 "data_offset": 0, 00:16:05.919 "data_size": 65536 00:16:05.919 }, 00:16:05.919 { 00:16:05.919 "name": "BaseBdev2", 00:16:05.919 "uuid": 
"929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:05.919 "is_configured": true, 00:16:05.919 "data_offset": 0, 00:16:05.919 "data_size": 65536 00:16:05.919 }, 00:16:05.919 { 00:16:05.919 "name": "BaseBdev3", 00:16:05.919 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:05.919 "is_configured": true, 00:16:05.919 "data_offset": 0, 00:16:05.919 "data_size": 65536 00:16:05.919 }, 00:16:05.919 { 00:16:05.919 "name": "BaseBdev4", 00:16:05.919 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:05.919 "is_configured": true, 00:16:05.919 "data_offset": 0, 00:16:05.919 "data_size": 65536 00:16:05.919 } 00:16:05.919 ] 00:16:05.919 }' 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.919 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.859 10:44:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.859 "name": "raid_bdev1", 00:16:06.859 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:06.859 "strip_size_kb": 64, 00:16:06.859 "state": "online", 00:16:06.859 "raid_level": "raid5f", 00:16:06.859 "superblock": false, 00:16:06.859 "num_base_bdevs": 4, 00:16:06.859 "num_base_bdevs_discovered": 4, 00:16:06.859 "num_base_bdevs_operational": 4, 00:16:06.859 "process": { 00:16:06.859 "type": "rebuild", 00:16:06.859 "target": "spare", 00:16:06.859 "progress": { 00:16:06.859 "blocks": 65280, 00:16:06.859 "percent": 33 00:16:06.859 } 00:16:06.859 }, 00:16:06.859 "base_bdevs_list": [ 00:16:06.859 { 00:16:06.859 "name": "spare", 00:16:06.859 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:06.859 "is_configured": true, 00:16:06.859 "data_offset": 0, 00:16:06.859 "data_size": 65536 00:16:06.859 }, 00:16:06.859 { 00:16:06.859 "name": "BaseBdev2", 00:16:06.859 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:06.859 "is_configured": true, 00:16:06.859 "data_offset": 0, 00:16:06.859 "data_size": 65536 00:16:06.859 }, 00:16:06.859 { 00:16:06.859 "name": "BaseBdev3", 00:16:06.859 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:06.859 "is_configured": true, 00:16:06.859 "data_offset": 0, 00:16:06.859 "data_size": 65536 00:16:06.859 }, 00:16:06.859 { 00:16:06.859 "name": "BaseBdev4", 00:16:06.859 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:06.859 "is_configured": true, 00:16:06.859 "data_offset": 0, 00:16:06.859 "data_size": 65536 00:16:06.859 } 
00:16:06.859 ] 00:16:06.859 }' 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.859 10:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.243 "name": "raid_bdev1", 00:16:08.243 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:08.243 
"strip_size_kb": 64, 00:16:08.243 "state": "online", 00:16:08.243 "raid_level": "raid5f", 00:16:08.243 "superblock": false, 00:16:08.243 "num_base_bdevs": 4, 00:16:08.243 "num_base_bdevs_discovered": 4, 00:16:08.243 "num_base_bdevs_operational": 4, 00:16:08.243 "process": { 00:16:08.243 "type": "rebuild", 00:16:08.243 "target": "spare", 00:16:08.243 "progress": { 00:16:08.243 "blocks": 88320, 00:16:08.243 "percent": 44 00:16:08.243 } 00:16:08.243 }, 00:16:08.243 "base_bdevs_list": [ 00:16:08.243 { 00:16:08.243 "name": "spare", 00:16:08.243 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:08.243 "is_configured": true, 00:16:08.243 "data_offset": 0, 00:16:08.243 "data_size": 65536 00:16:08.243 }, 00:16:08.243 { 00:16:08.243 "name": "BaseBdev2", 00:16:08.243 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:08.243 "is_configured": true, 00:16:08.243 "data_offset": 0, 00:16:08.243 "data_size": 65536 00:16:08.243 }, 00:16:08.243 { 00:16:08.243 "name": "BaseBdev3", 00:16:08.243 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:08.243 "is_configured": true, 00:16:08.243 "data_offset": 0, 00:16:08.243 "data_size": 65536 00:16:08.243 }, 00:16:08.243 { 00:16:08.243 "name": "BaseBdev4", 00:16:08.243 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:08.243 "is_configured": true, 00:16:08.243 "data_offset": 0, 00:16:08.243 "data_size": 65536 00:16:08.243 } 00:16:08.243 ] 00:16:08.243 }' 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.243 10:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.184 10:44:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.184 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.184 "name": "raid_bdev1", 00:16:09.184 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:09.184 "strip_size_kb": 64, 00:16:09.184 "state": "online", 00:16:09.184 "raid_level": "raid5f", 00:16:09.184 "superblock": false, 00:16:09.184 "num_base_bdevs": 4, 00:16:09.184 "num_base_bdevs_discovered": 4, 00:16:09.184 "num_base_bdevs_operational": 4, 00:16:09.184 "process": { 00:16:09.184 "type": "rebuild", 00:16:09.184 "target": "spare", 00:16:09.184 "progress": { 00:16:09.184 "blocks": 109440, 00:16:09.184 "percent": 55 00:16:09.184 } 00:16:09.184 }, 00:16:09.184 "base_bdevs_list": [ 00:16:09.184 { 00:16:09.184 "name": "spare", 00:16:09.184 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 
00:16:09.184 "is_configured": true, 00:16:09.184 "data_offset": 0, 00:16:09.184 "data_size": 65536 00:16:09.184 }, 00:16:09.184 { 00:16:09.184 "name": "BaseBdev2", 00:16:09.184 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:09.184 "is_configured": true, 00:16:09.184 "data_offset": 0, 00:16:09.184 "data_size": 65536 00:16:09.184 }, 00:16:09.184 { 00:16:09.184 "name": "BaseBdev3", 00:16:09.184 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:09.184 "is_configured": true, 00:16:09.184 "data_offset": 0, 00:16:09.184 "data_size": 65536 00:16:09.184 }, 00:16:09.184 { 00:16:09.185 "name": "BaseBdev4", 00:16:09.185 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:09.185 "is_configured": true, 00:16:09.185 "data_offset": 0, 00:16:09.185 "data_size": 65536 00:16:09.185 } 00:16:09.185 ] 00:16:09.185 }' 00:16:09.185 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.185 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.185 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.185 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.185 10:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.126 10:44:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.386 10:44:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.386 10:44:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.386 "name": "raid_bdev1", 00:16:10.386 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:10.386 "strip_size_kb": 64, 00:16:10.386 "state": "online", 00:16:10.386 "raid_level": "raid5f", 00:16:10.386 "superblock": false, 00:16:10.386 "num_base_bdevs": 4, 00:16:10.386 "num_base_bdevs_discovered": 4, 00:16:10.386 "num_base_bdevs_operational": 4, 00:16:10.386 "process": { 00:16:10.386 "type": "rebuild", 00:16:10.386 "target": "spare", 00:16:10.386 "progress": { 00:16:10.386 "blocks": 130560, 00:16:10.386 "percent": 66 00:16:10.386 } 00:16:10.386 }, 00:16:10.386 "base_bdevs_list": [ 00:16:10.386 { 00:16:10.386 "name": "spare", 00:16:10.386 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:10.386 "is_configured": true, 00:16:10.386 "data_offset": 0, 00:16:10.386 "data_size": 65536 00:16:10.386 }, 00:16:10.386 { 00:16:10.386 "name": "BaseBdev2", 00:16:10.386 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:10.386 "is_configured": true, 00:16:10.386 "data_offset": 0, 00:16:10.386 "data_size": 65536 00:16:10.386 }, 00:16:10.386 { 00:16:10.386 "name": "BaseBdev3", 00:16:10.386 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:10.386 "is_configured": true, 00:16:10.386 "data_offset": 0, 00:16:10.386 "data_size": 65536 00:16:10.386 }, 00:16:10.386 { 00:16:10.386 "name": 
"BaseBdev4", 00:16:10.386 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:10.386 "is_configured": true, 00:16:10.386 "data_offset": 0, 00:16:10.386 "data_size": 65536 00:16:10.386 } 00:16:10.386 ] 00:16:10.386 }' 00:16:10.386 10:44:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.386 10:44:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.386 10:44:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.386 10:44:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.386 10:44:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.336 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.337 10:44:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.337 "name": "raid_bdev1", 00:16:11.337 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:11.337 "strip_size_kb": 64, 00:16:11.337 "state": "online", 00:16:11.337 "raid_level": "raid5f", 00:16:11.337 "superblock": false, 00:16:11.337 "num_base_bdevs": 4, 00:16:11.337 "num_base_bdevs_discovered": 4, 00:16:11.337 "num_base_bdevs_operational": 4, 00:16:11.337 "process": { 00:16:11.337 "type": "rebuild", 00:16:11.337 "target": "spare", 00:16:11.337 "progress": { 00:16:11.337 "blocks": 153600, 00:16:11.337 "percent": 78 00:16:11.337 } 00:16:11.337 }, 00:16:11.337 "base_bdevs_list": [ 00:16:11.337 { 00:16:11.337 "name": "spare", 00:16:11.337 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:11.337 "is_configured": true, 00:16:11.337 "data_offset": 0, 00:16:11.337 "data_size": 65536 00:16:11.337 }, 00:16:11.337 { 00:16:11.337 "name": "BaseBdev2", 00:16:11.337 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:11.337 "is_configured": true, 00:16:11.337 "data_offset": 0, 00:16:11.337 "data_size": 65536 00:16:11.337 }, 00:16:11.337 { 00:16:11.337 "name": "BaseBdev3", 00:16:11.337 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:11.337 "is_configured": true, 00:16:11.337 "data_offset": 0, 00:16:11.337 "data_size": 65536 00:16:11.337 }, 00:16:11.337 { 00:16:11.337 "name": "BaseBdev4", 00:16:11.337 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:11.337 "is_configured": true, 00:16:11.337 "data_offset": 0, 00:16:11.337 "data_size": 65536 00:16:11.337 } 00:16:11.337 ] 00:16:11.337 }' 00:16:11.337 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.613 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.613 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.613 10:44:37 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.613 10:44:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.553 "name": "raid_bdev1", 00:16:12.553 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:12.553 "strip_size_kb": 64, 00:16:12.553 "state": "online", 00:16:12.553 "raid_level": "raid5f", 00:16:12.553 "superblock": false, 00:16:12.553 "num_base_bdevs": 4, 00:16:12.553 "num_base_bdevs_discovered": 4, 00:16:12.553 "num_base_bdevs_operational": 4, 00:16:12.553 "process": { 00:16:12.553 "type": "rebuild", 00:16:12.553 "target": "spare", 00:16:12.553 "progress": { 00:16:12.553 "blocks": 174720, 00:16:12.553 "percent": 88 
00:16:12.553 } 00:16:12.553 }, 00:16:12.553 "base_bdevs_list": [ 00:16:12.553 { 00:16:12.553 "name": "spare", 00:16:12.553 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:12.553 "is_configured": true, 00:16:12.553 "data_offset": 0, 00:16:12.553 "data_size": 65536 00:16:12.553 }, 00:16:12.553 { 00:16:12.553 "name": "BaseBdev2", 00:16:12.553 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:12.553 "is_configured": true, 00:16:12.553 "data_offset": 0, 00:16:12.553 "data_size": 65536 00:16:12.553 }, 00:16:12.553 { 00:16:12.553 "name": "BaseBdev3", 00:16:12.553 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:12.553 "is_configured": true, 00:16:12.553 "data_offset": 0, 00:16:12.553 "data_size": 65536 00:16:12.553 }, 00:16:12.553 { 00:16:12.553 "name": "BaseBdev4", 00:16:12.553 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:12.553 "is_configured": true, 00:16:12.553 "data_offset": 0, 00:16:12.553 "data_size": 65536 00:16:12.553 } 00:16:12.553 ] 00:16:12.553 }' 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.553 10:44:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
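The loop traced above polls `rpc_cmd bdev_raid_get_bdevs all` once per second and checks the `process` object with jq until the rebuild finishes. A minimal self-contained sketch of that check, using a canned JSON string in place of the live RPC output (the `raid_bdev_info` value here is a trimmed stand-in, not the full RPC response):

```shell
#!/usr/bin/env bash
# Stand-in for what the script captures from `rpc_cmd bdev_raid_get_bdevs all`
# filtered through `jq -r '.[] | select(.name == "raid_bdev1")'`.
raid_bdev_info='{"name":"raid_bdev1","state":"online","process":{"type":"rebuild","target":"spare","progress":{"blocks":130560,"percent":66}}}'

# Same jq filters as bdev_raid.sh@176-177: fall back to "none" when no
# background process is running, which is how the loop detects completion.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

if [[ $process_type == rebuild ]] && [[ $target == spare ]]; then
    echo "rebuild in progress"
fi
```

Once the rebuild completes, `.process` disappears from the RPC output, both filters yield `none`, and the script's `break` at bdev_raid.sh@709 fires, exactly as seen at the 10:44:40 iteration above.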
00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.936 [2024-11-18 10:44:39.418694] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:13.936 [2024-11-18 10:44:39.418803] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:13.936 [2024-11-18 10:44:39.418870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.936 "name": "raid_bdev1", 00:16:13.936 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:13.936 "strip_size_kb": 64, 00:16:13.936 "state": "online", 00:16:13.936 "raid_level": "raid5f", 00:16:13.936 "superblock": false, 00:16:13.936 "num_base_bdevs": 4, 00:16:13.936 "num_base_bdevs_discovered": 4, 00:16:13.936 "num_base_bdevs_operational": 4, 00:16:13.936 "process": { 00:16:13.936 "type": "rebuild", 00:16:13.936 "target": "spare", 00:16:13.936 "progress": { 00:16:13.936 "blocks": 195840, 00:16:13.936 "percent": 99 00:16:13.936 } 00:16:13.936 }, 00:16:13.936 "base_bdevs_list": [ 00:16:13.936 { 00:16:13.936 "name": "spare", 00:16:13.936 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:13.936 "is_configured": true, 00:16:13.936 "data_offset": 
0, 00:16:13.936 "data_size": 65536 00:16:13.936 }, 00:16:13.936 { 00:16:13.936 "name": "BaseBdev2", 00:16:13.936 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:13.936 "is_configured": true, 00:16:13.936 "data_offset": 0, 00:16:13.936 "data_size": 65536 00:16:13.936 }, 00:16:13.936 { 00:16:13.936 "name": "BaseBdev3", 00:16:13.936 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:13.936 "is_configured": true, 00:16:13.936 "data_offset": 0, 00:16:13.936 "data_size": 65536 00:16:13.936 }, 00:16:13.936 { 00:16:13.936 "name": "BaseBdev4", 00:16:13.936 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:13.936 "is_configured": true, 00:16:13.936 "data_offset": 0, 00:16:13.936 "data_size": 65536 00:16:13.936 } 00:16:13.936 ] 00:16:13.936 }' 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.936 10:44:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.873 10:44:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.873 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.873 "name": "raid_bdev1", 00:16:14.873 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:14.873 "strip_size_kb": 64, 00:16:14.873 "state": "online", 00:16:14.873 "raid_level": "raid5f", 00:16:14.873 "superblock": false, 00:16:14.873 "num_base_bdevs": 4, 00:16:14.873 "num_base_bdevs_discovered": 4, 00:16:14.873 "num_base_bdevs_operational": 4, 00:16:14.873 "base_bdevs_list": [ 00:16:14.873 { 00:16:14.873 "name": "spare", 00:16:14.873 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:14.873 "is_configured": true, 00:16:14.874 "data_offset": 0, 00:16:14.874 "data_size": 65536 00:16:14.874 }, 00:16:14.874 { 00:16:14.874 "name": "BaseBdev2", 00:16:14.874 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:14.874 "is_configured": true, 00:16:14.874 "data_offset": 0, 00:16:14.874 "data_size": 65536 00:16:14.874 }, 00:16:14.874 { 00:16:14.874 "name": "BaseBdev3", 00:16:14.874 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:14.874 "is_configured": true, 00:16:14.874 "data_offset": 0, 00:16:14.874 "data_size": 65536 00:16:14.874 }, 00:16:14.874 { 00:16:14.874 "name": "BaseBdev4", 00:16:14.874 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:14.874 "is_configured": true, 00:16:14.874 "data_offset": 0, 00:16:14.874 "data_size": 65536 00:16:14.874 } 00:16:14.874 ] 00:16:14.874 }' 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.874 "name": "raid_bdev1", 00:16:14.874 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:14.874 "strip_size_kb": 64, 00:16:14.874 "state": "online", 00:16:14.874 "raid_level": "raid5f", 00:16:14.874 "superblock": false, 00:16:14.874 "num_base_bdevs": 4, 00:16:14.874 "num_base_bdevs_discovered": 4, 
00:16:14.874 "num_base_bdevs_operational": 4, 00:16:14.874 "base_bdevs_list": [ 00:16:14.874 { 00:16:14.874 "name": "spare", 00:16:14.874 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:14.874 "is_configured": true, 00:16:14.874 "data_offset": 0, 00:16:14.874 "data_size": 65536 00:16:14.874 }, 00:16:14.874 { 00:16:14.874 "name": "BaseBdev2", 00:16:14.874 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:14.874 "is_configured": true, 00:16:14.874 "data_offset": 0, 00:16:14.874 "data_size": 65536 00:16:14.874 }, 00:16:14.874 { 00:16:14.874 "name": "BaseBdev3", 00:16:14.874 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:14.874 "is_configured": true, 00:16:14.874 "data_offset": 0, 00:16:14.874 "data_size": 65536 00:16:14.874 }, 00:16:14.874 { 00:16:14.874 "name": "BaseBdev4", 00:16:14.874 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:14.874 "is_configured": true, 00:16:14.874 "data_offset": 0, 00:16:14.874 "data_size": 65536 00:16:14.874 } 00:16:14.874 ] 00:16:14.874 }' 00:16:14.874 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.133 "name": "raid_bdev1", 00:16:15.133 "uuid": "87f66b6c-1fdd-4a04-87c1-bb02658b5110", 00:16:15.133 "strip_size_kb": 64, 00:16:15.133 "state": "online", 00:16:15.133 "raid_level": "raid5f", 00:16:15.133 "superblock": false, 00:16:15.133 "num_base_bdevs": 4, 00:16:15.133 "num_base_bdevs_discovered": 4, 00:16:15.133 "num_base_bdevs_operational": 4, 00:16:15.133 "base_bdevs_list": [ 00:16:15.133 { 00:16:15.133 "name": "spare", 00:16:15.133 "uuid": "f67bb9ea-5372-5fda-8c2c-ea1bc7856071", 00:16:15.133 "is_configured": true, 00:16:15.133 "data_offset": 0, 00:16:15.133 "data_size": 65536 00:16:15.133 }, 00:16:15.133 { 00:16:15.133 "name": "BaseBdev2", 00:16:15.133 "uuid": "929df77f-ad1b-52f4-8633-3db7c3cdea85", 00:16:15.133 "is_configured": true, 00:16:15.133 "data_offset": 0, 00:16:15.133 
"data_size": 65536 00:16:15.133 }, 00:16:15.133 { 00:16:15.133 "name": "BaseBdev3", 00:16:15.133 "uuid": "2ca64579-c59d-5940-bbbe-dd3457b108da", 00:16:15.133 "is_configured": true, 00:16:15.133 "data_offset": 0, 00:16:15.133 "data_size": 65536 00:16:15.133 }, 00:16:15.133 { 00:16:15.133 "name": "BaseBdev4", 00:16:15.133 "uuid": "bfb4a666-a7a6-5ffc-8cfc-d9b587005d75", 00:16:15.133 "is_configured": true, 00:16:15.133 "data_offset": 0, 00:16:15.133 "data_size": 65536 00:16:15.133 } 00:16:15.133 ] 00:16:15.133 }' 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.133 10:44:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.393 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:15.393 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.393 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.393 [2024-11-18 10:44:41.260613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.393 [2024-11-18 10:44:41.260651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.393 [2024-11-18 10:44:41.260728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.393 [2024-11-18 10:44:41.260811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.393 [2024-11-18 10:44:41.260820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:15.393 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.393 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.393 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:16:15.393 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.393 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:15.653 /dev/nbd0 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
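The `waitfornbd` helper traced here retries a `grep` of `/proc/partitions` until the freshly started nbd device appears. A sketch of that pattern, reconstructed from the xtrace above (the retry count and sleep interval follow the loop bounds visible in the trace; the exact sleep used by autotest_common.sh is not shown in this log):

```shell
#!/usr/bin/env bash
# Wait for an nbd device (e.g. "nbd0") to show up in /proc/partitions,
# retrying up to 20 times as in the common/autotest_common.sh@875-877 loop.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            return 0   # device registered; safe to dd against it
        fi
        sleep 0.1
    done
    return 1           # device never appeared
}
```

After the device appears, the script issues a single direct-I/O `dd` read (`bs=4096 count=1 iflag=direct`) and checks the copied size, which is the `1+0 records in / 1+0 records out` output seen above.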
00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:15.653 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.913 1+0 records in 00:16:15.913 1+0 records out 00:16:15.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343015 s, 11.9 MB/s 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:15.913 /dev/nbd1 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:15.913 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.173 1+0 records in 00:16:16.173 1+0 records out 00:16:16.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033957 s, 12.1 MB/s 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 
']' 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.173 10:44:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@45 -- # return 0 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.433 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84395 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84395 ']' 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84395 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.693 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84395 00:16:16.693 killing process with pid 84395 00:16:16.693 Received shutdown signal, test time was about 60.000000 seconds 00:16:16.693 00:16:16.693 
Latency(us) 00:16:16.693 [2024-11-18T10:44:42.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.693 [2024-11-18T10:44:42.578Z] =================================================================================================================== 00:16:16.694 [2024-11-18T10:44:42.579Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:16.694 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.694 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.694 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84395' 00:16:16.694 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84395 00:16:16.694 [2024-11-18 10:44:42.454471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.694 10:44:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84395 00:16:17.263 [2024-11-18 10:44:42.918644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.206 10:44:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:18.206 00:16:18.206 real 0m20.098s 00:16:18.206 user 0m23.933s 00:16:18.206 sys 0m2.406s 00:16:18.206 10:44:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.206 10:44:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.206 ************************************ 00:16:18.206 END TEST raid5f_rebuild_test 00:16:18.206 ************************************ 00:16:18.206 10:44:44 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:18.206 10:44:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:18.206 10:44:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 
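The test's final data check at bdev_raid.sh@738 byte-compares the two nbd exports (`cmp -i 0 /dev/nbd0 /dev/nbd1`), i.e. BaseBdev1 against the rebuilt spare. A runnable sketch of that comparison, with plain files (`disk_a`, `disk_b` are hypothetical names) standing in for the block devices:

```shell
#!/usr/bin/env bash
# Two identical stand-ins for /dev/nbd0 and /dev/nbd1; after a successful
# rebuild the spare must be byte-identical to the surviving base bdev.
printf 'raid5f-data' > disk_a
printf 'raid5f-data' > disk_b

# -i 0 skips zero initial bytes (matching the log's invocation); -s added
# here only to keep the sketch quiet.
if cmp -s -i 0 disk_a disk_b; then
    result="devices match"
else
    result="devices differ"
fi
echo "$result"
rm -f disk_a disk_b
```

A non-zero `cmp` exit here would fail the test before the `killprocess`/teardown sequence that follows in the log.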
00:16:18.206 10:44:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.206 ************************************ 00:16:18.206 START TEST raid5f_rebuild_test_sb 00:16:18.206 ************************************ 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84913 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84913 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84913 ']' 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.206 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.466 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:18.466 Zero copy mechanism will not be used. 00:16:18.466 [2024-11-18 10:44:44.140990] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:18.466 [2024-11-18 10:44:44.141116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84913 ] 00:16:18.466 [2024-11-18 10:44:44.320879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.726 [2024-11-18 10:44:44.430210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.986 [2024-11-18 10:44:44.631139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.986 [2024-11-18 10:44:44.631199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.246 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.246 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:19.246 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:19.246 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:19.246 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.246 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.246 BaseBdev1_malloc 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.246 [2024-11-18 10:44:45.012128] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:19.246 [2024-11-18 10:44:45.012224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.246 [2024-11-18 10:44:45.012248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:19.246 [2024-11-18 10:44:45.012260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.246 [2024-11-18 10:44:45.014297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.246 [2024-11-18 10:44:45.014334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:19.246 BaseBdev1 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.246 BaseBdev2_malloc 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.246 [2024-11-18 10:44:45.067049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:19.246 [2024-11-18 10:44:45.067102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:19.246 [2024-11-18 10:44:45.067119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:19.246 [2024-11-18 10:44:45.067131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.246 [2024-11-18 10:44:45.069179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.246 [2024-11-18 10:44:45.069224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:19.246 BaseBdev2 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.246 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.506 BaseBdev3_malloc 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.506 [2024-11-18 10:44:45.150734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:19.506 [2024-11-18 10:44:45.150784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.506 [2024-11-18 10:44:45.150805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:19.506 [2024-11-18 
10:44:45.150816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.506 [2024-11-18 10:44:45.152869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.506 [2024-11-18 10:44:45.152944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:19.506 BaseBdev3 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.506 BaseBdev4_malloc 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.506 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.506 [2024-11-18 10:44:45.201696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:19.506 [2024-11-18 10:44:45.201745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.506 [2024-11-18 10:44:45.201762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:19.506 [2024-11-18 10:44:45.201772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.506 [2024-11-18 10:44:45.203670] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:19.506 [2024-11-18 10:44:45.203712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:19.506 BaseBdev4 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.507 spare_malloc 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.507 spare_delay 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.507 [2024-11-18 10:44:45.268814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.507 [2024-11-18 10:44:45.268906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.507 [2024-11-18 10:44:45.268928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:19.507 [2024-11-18 10:44:45.268938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.507 [2024-11-18 10:44:45.270902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.507 [2024-11-18 10:44:45.270942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.507 spare 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.507 [2024-11-18 10:44:45.280847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.507 [2024-11-18 10:44:45.282574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.507 [2024-11-18 10:44:45.282632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.507 [2024-11-18 10:44:45.282681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:19.507 [2024-11-18 10:44:45.282871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:19.507 [2024-11-18 10:44:45.282887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.507 [2024-11-18 10:44:45.283098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:19.507 [2024-11-18 10:44:45.290008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:19.507 [2024-11-18 10:44:45.290029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:19.507 [2024-11-18 10:44:45.290224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.507 10:44:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.507 "name": "raid_bdev1", 00:16:19.507 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:19.507 "strip_size_kb": 64, 00:16:19.507 "state": "online", 00:16:19.507 "raid_level": "raid5f", 00:16:19.507 "superblock": true, 00:16:19.507 "num_base_bdevs": 4, 00:16:19.507 "num_base_bdevs_discovered": 4, 00:16:19.507 "num_base_bdevs_operational": 4, 00:16:19.507 "base_bdevs_list": [ 00:16:19.507 { 00:16:19.507 "name": "BaseBdev1", 00:16:19.507 "uuid": "0196ae1c-1397-548d-a873-cdaefad3a6a5", 00:16:19.507 "is_configured": true, 00:16:19.507 "data_offset": 2048, 00:16:19.507 "data_size": 63488 00:16:19.507 }, 00:16:19.507 { 00:16:19.507 "name": "BaseBdev2", 00:16:19.507 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:19.507 "is_configured": true, 00:16:19.507 "data_offset": 2048, 00:16:19.507 "data_size": 63488 00:16:19.507 }, 00:16:19.507 { 00:16:19.507 "name": "BaseBdev3", 00:16:19.507 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:19.507 "is_configured": true, 00:16:19.507 "data_offset": 2048, 00:16:19.507 "data_size": 63488 00:16:19.507 }, 00:16:19.507 { 00:16:19.507 "name": "BaseBdev4", 00:16:19.507 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:19.507 "is_configured": true, 00:16:19.507 "data_offset": 2048, 00:16:19.507 "data_size": 63488 00:16:19.507 } 00:16:19.507 ] 00:16:19.507 }' 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.507 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.078 10:44:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.078 [2024-11-18 10:44:45.749472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:20.078 10:44:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.078 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:20.338 [2024-11-18 10:44:46.020878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:20.338 /dev/nbd0 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.338 1+0 records in 00:16:20.338 
1+0 records out 00:16:20.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570857 s, 7.2 MB/s 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:20.338 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:20.909 496+0 records in 00:16:20.909 496+0 records out 00:16:20.909 97517568 bytes (98 MB, 93 MiB) copied, 0.563988 s, 173 MB/s 00:16:20.909 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:20.909 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.909 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:20.909 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.909 10:44:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:20.909 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.909 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.169 [2024-11-18 10:44:46.904663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.169 [2024-11-18 10:44:46.921665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:21.169 10:44:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.169 "name": "raid_bdev1", 00:16:21.169 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:21.169 "strip_size_kb": 64, 00:16:21.169 "state": "online", 00:16:21.169 "raid_level": "raid5f", 00:16:21.169 "superblock": true, 00:16:21.169 "num_base_bdevs": 4, 00:16:21.169 "num_base_bdevs_discovered": 3, 00:16:21.169 "num_base_bdevs_operational": 3, 00:16:21.169 
"base_bdevs_list": [ 00:16:21.169 { 00:16:21.169 "name": null, 00:16:21.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.169 "is_configured": false, 00:16:21.169 "data_offset": 0, 00:16:21.169 "data_size": 63488 00:16:21.169 }, 00:16:21.169 { 00:16:21.169 "name": "BaseBdev2", 00:16:21.169 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:21.169 "is_configured": true, 00:16:21.169 "data_offset": 2048, 00:16:21.169 "data_size": 63488 00:16:21.169 }, 00:16:21.169 { 00:16:21.169 "name": "BaseBdev3", 00:16:21.169 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:21.169 "is_configured": true, 00:16:21.169 "data_offset": 2048, 00:16:21.169 "data_size": 63488 00:16:21.169 }, 00:16:21.169 { 00:16:21.169 "name": "BaseBdev4", 00:16:21.169 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:21.169 "is_configured": true, 00:16:21.169 "data_offset": 2048, 00:16:21.169 "data_size": 63488 00:16:21.169 } 00:16:21.169 ] 00:16:21.169 }' 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.169 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.739 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:21.739 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.739 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.739 [2024-11-18 10:44:47.396899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.739 [2024-11-18 10:44:47.411579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:21.739 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.739 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:21.739 [2024-11-18 10:44:47.420716] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.679 "name": "raid_bdev1", 00:16:22.679 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:22.679 "strip_size_kb": 64, 00:16:22.679 "state": "online", 00:16:22.679 "raid_level": "raid5f", 00:16:22.679 "superblock": true, 00:16:22.679 "num_base_bdevs": 4, 00:16:22.679 "num_base_bdevs_discovered": 4, 00:16:22.679 "num_base_bdevs_operational": 4, 00:16:22.679 "process": { 00:16:22.679 "type": "rebuild", 00:16:22.679 "target": "spare", 00:16:22.679 "progress": { 00:16:22.679 "blocks": 19200, 00:16:22.679 "percent": 10 00:16:22.679 } 00:16:22.679 }, 00:16:22.679 "base_bdevs_list": [ 00:16:22.679 { 00:16:22.679 "name": "spare", 00:16:22.679 "uuid": 
"6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:22.679 "is_configured": true, 00:16:22.679 "data_offset": 2048, 00:16:22.679 "data_size": 63488 00:16:22.679 }, 00:16:22.679 { 00:16:22.679 "name": "BaseBdev2", 00:16:22.679 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:22.679 "is_configured": true, 00:16:22.679 "data_offset": 2048, 00:16:22.679 "data_size": 63488 00:16:22.679 }, 00:16:22.679 { 00:16:22.679 "name": "BaseBdev3", 00:16:22.679 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:22.679 "is_configured": true, 00:16:22.679 "data_offset": 2048, 00:16:22.679 "data_size": 63488 00:16:22.679 }, 00:16:22.679 { 00:16:22.679 "name": "BaseBdev4", 00:16:22.679 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:22.679 "is_configured": true, 00:16:22.679 "data_offset": 2048, 00:16:22.679 "data_size": 63488 00:16:22.679 } 00:16:22.679 ] 00:16:22.679 }' 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.679 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.679 [2024-11-18 10:44:48.551380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.938 [2024-11-18 10:44:48.626118] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.938 [2024-11-18 10:44:48.626193] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.938 [2024-11-18 10:44:48.626225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.938 [2024-11-18 10:44:48.626234] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.938 "name": "raid_bdev1", 00:16:22.938 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:22.938 "strip_size_kb": 64, 00:16:22.938 "state": "online", 00:16:22.938 "raid_level": "raid5f", 00:16:22.938 "superblock": true, 00:16:22.938 "num_base_bdevs": 4, 00:16:22.938 "num_base_bdevs_discovered": 3, 00:16:22.938 "num_base_bdevs_operational": 3, 00:16:22.938 "base_bdevs_list": [ 00:16:22.938 { 00:16:22.938 "name": null, 00:16:22.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.938 "is_configured": false, 00:16:22.938 "data_offset": 0, 00:16:22.938 "data_size": 63488 00:16:22.938 }, 00:16:22.938 { 00:16:22.938 "name": "BaseBdev2", 00:16:22.938 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:22.938 "is_configured": true, 00:16:22.938 "data_offset": 2048, 00:16:22.938 "data_size": 63488 00:16:22.938 }, 00:16:22.938 { 00:16:22.938 "name": "BaseBdev3", 00:16:22.938 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:22.938 "is_configured": true, 00:16:22.938 "data_offset": 2048, 00:16:22.938 "data_size": 63488 00:16:22.938 }, 00:16:22.938 { 00:16:22.938 "name": "BaseBdev4", 00:16:22.938 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:22.938 "is_configured": true, 00:16:22.938 "data_offset": 2048, 00:16:22.938 "data_size": 63488 00:16:22.938 } 00:16:22.938 ] 00:16:22.938 }' 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.938 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.198 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.198 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.198 
10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.198 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.198 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.198 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.199 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.199 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.199 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.473 "name": "raid_bdev1", 00:16:23.473 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:23.473 "strip_size_kb": 64, 00:16:23.473 "state": "online", 00:16:23.473 "raid_level": "raid5f", 00:16:23.473 "superblock": true, 00:16:23.473 "num_base_bdevs": 4, 00:16:23.473 "num_base_bdevs_discovered": 3, 00:16:23.473 "num_base_bdevs_operational": 3, 00:16:23.473 "base_bdevs_list": [ 00:16:23.473 { 00:16:23.473 "name": null, 00:16:23.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.473 "is_configured": false, 00:16:23.473 "data_offset": 0, 00:16:23.473 "data_size": 63488 00:16:23.473 }, 00:16:23.473 { 00:16:23.473 "name": "BaseBdev2", 00:16:23.473 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:23.473 "is_configured": true, 00:16:23.473 "data_offset": 2048, 00:16:23.473 "data_size": 63488 00:16:23.473 }, 00:16:23.473 { 00:16:23.473 "name": "BaseBdev3", 00:16:23.473 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:23.473 "is_configured": true, 00:16:23.473 "data_offset": 2048, 00:16:23.473 
"data_size": 63488 00:16:23.473 }, 00:16:23.473 { 00:16:23.473 "name": "BaseBdev4", 00:16:23.473 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:23.473 "is_configured": true, 00:16:23.473 "data_offset": 2048, 00:16:23.473 "data_size": 63488 00:16:23.473 } 00:16:23.473 ] 00:16:23.473 }' 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.473 [2024-11-18 10:44:49.221394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.473 [2024-11-18 10:44:49.235277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.473 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:23.473 [2024-11-18 10:44:49.243793] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.411 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.411 "name": "raid_bdev1", 00:16:24.411 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:24.411 "strip_size_kb": 64, 00:16:24.411 "state": "online", 00:16:24.411 "raid_level": "raid5f", 00:16:24.411 "superblock": true, 00:16:24.411 "num_base_bdevs": 4, 00:16:24.411 "num_base_bdevs_discovered": 4, 00:16:24.411 "num_base_bdevs_operational": 4, 00:16:24.411 "process": { 00:16:24.411 "type": "rebuild", 00:16:24.411 "target": "spare", 00:16:24.411 "progress": { 00:16:24.411 "blocks": 19200, 00:16:24.411 "percent": 10 00:16:24.411 } 00:16:24.411 }, 00:16:24.411 "base_bdevs_list": [ 00:16:24.411 { 00:16:24.411 "name": "spare", 00:16:24.411 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:24.411 "is_configured": true, 00:16:24.411 "data_offset": 2048, 00:16:24.411 "data_size": 63488 00:16:24.411 }, 00:16:24.411 { 00:16:24.411 "name": "BaseBdev2", 00:16:24.411 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:24.411 "is_configured": true, 00:16:24.411 "data_offset": 2048, 00:16:24.411 "data_size": 63488 00:16:24.411 }, 00:16:24.411 { 
00:16:24.411 "name": "BaseBdev3", 00:16:24.411 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:24.411 "is_configured": true, 00:16:24.411 "data_offset": 2048, 00:16:24.411 "data_size": 63488 00:16:24.411 }, 00:16:24.411 { 00:16:24.411 "name": "BaseBdev4", 00:16:24.411 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:24.411 "is_configured": true, 00:16:24.411 "data_offset": 2048, 00:16:24.411 "data_size": 63488 00:16:24.411 } 00:16:24.411 ] 00:16:24.411 }' 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:24.670 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=632 00:16:24.670 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.671 "name": "raid_bdev1", 00:16:24.671 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:24.671 "strip_size_kb": 64, 00:16:24.671 "state": "online", 00:16:24.671 "raid_level": "raid5f", 00:16:24.671 "superblock": true, 00:16:24.671 "num_base_bdevs": 4, 00:16:24.671 "num_base_bdevs_discovered": 4, 00:16:24.671 "num_base_bdevs_operational": 4, 00:16:24.671 "process": { 00:16:24.671 "type": "rebuild", 00:16:24.671 "target": "spare", 00:16:24.671 "progress": { 00:16:24.671 "blocks": 21120, 00:16:24.671 "percent": 11 00:16:24.671 } 00:16:24.671 }, 00:16:24.671 "base_bdevs_list": [ 00:16:24.671 { 00:16:24.671 "name": "spare", 00:16:24.671 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:24.671 "is_configured": true, 00:16:24.671 "data_offset": 2048, 00:16:24.671 "data_size": 63488 00:16:24.671 }, 00:16:24.671 { 00:16:24.671 "name": "BaseBdev2", 00:16:24.671 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:24.671 "is_configured": true, 00:16:24.671 "data_offset": 2048, 00:16:24.671 "data_size": 63488 00:16:24.671 }, 00:16:24.671 { 
00:16:24.671 "name": "BaseBdev3", 00:16:24.671 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:24.671 "is_configured": true, 00:16:24.671 "data_offset": 2048, 00:16:24.671 "data_size": 63488 00:16:24.671 }, 00:16:24.671 { 00:16:24.671 "name": "BaseBdev4", 00:16:24.671 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:24.671 "is_configured": true, 00:16:24.671 "data_offset": 2048, 00:16:24.671 "data_size": 63488 00:16:24.671 } 00:16:24.671 ] 00:16:24.671 }' 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.671 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.054 "name": "raid_bdev1", 00:16:26.054 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:26.054 "strip_size_kb": 64, 00:16:26.054 "state": "online", 00:16:26.054 "raid_level": "raid5f", 00:16:26.054 "superblock": true, 00:16:26.054 "num_base_bdevs": 4, 00:16:26.054 "num_base_bdevs_discovered": 4, 00:16:26.054 "num_base_bdevs_operational": 4, 00:16:26.054 "process": { 00:16:26.054 "type": "rebuild", 00:16:26.054 "target": "spare", 00:16:26.054 "progress": { 00:16:26.054 "blocks": 42240, 00:16:26.054 "percent": 22 00:16:26.054 } 00:16:26.054 }, 00:16:26.054 "base_bdevs_list": [ 00:16:26.054 { 00:16:26.054 "name": "spare", 00:16:26.054 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:26.054 "is_configured": true, 00:16:26.054 "data_offset": 2048, 00:16:26.054 "data_size": 63488 00:16:26.054 }, 00:16:26.054 { 00:16:26.054 "name": "BaseBdev2", 00:16:26.054 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:26.054 "is_configured": true, 00:16:26.054 "data_offset": 2048, 00:16:26.054 "data_size": 63488 00:16:26.054 }, 00:16:26.054 { 00:16:26.054 "name": "BaseBdev3", 00:16:26.054 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:26.054 "is_configured": true, 00:16:26.054 "data_offset": 2048, 00:16:26.054 "data_size": 63488 00:16:26.054 }, 00:16:26.054 { 00:16:26.054 "name": "BaseBdev4", 00:16:26.054 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:26.054 "is_configured": true, 00:16:26.054 "data_offset": 2048, 00:16:26.054 "data_size": 63488 00:16:26.054 } 00:16:26.054 ] 00:16:26.054 }' 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.054 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.994 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.994 "name": "raid_bdev1", 00:16:26.995 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:26.995 "strip_size_kb": 64, 00:16:26.995 "state": "online", 00:16:26.995 
"raid_level": "raid5f", 00:16:26.995 "superblock": true, 00:16:26.995 "num_base_bdevs": 4, 00:16:26.995 "num_base_bdevs_discovered": 4, 00:16:26.995 "num_base_bdevs_operational": 4, 00:16:26.995 "process": { 00:16:26.995 "type": "rebuild", 00:16:26.995 "target": "spare", 00:16:26.995 "progress": { 00:16:26.995 "blocks": 65280, 00:16:26.995 "percent": 34 00:16:26.995 } 00:16:26.995 }, 00:16:26.995 "base_bdevs_list": [ 00:16:26.995 { 00:16:26.995 "name": "spare", 00:16:26.995 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:26.995 "is_configured": true, 00:16:26.995 "data_offset": 2048, 00:16:26.995 "data_size": 63488 00:16:26.995 }, 00:16:26.995 { 00:16:26.995 "name": "BaseBdev2", 00:16:26.995 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:26.995 "is_configured": true, 00:16:26.995 "data_offset": 2048, 00:16:26.995 "data_size": 63488 00:16:26.995 }, 00:16:26.995 { 00:16:26.995 "name": "BaseBdev3", 00:16:26.995 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:26.995 "is_configured": true, 00:16:26.995 "data_offset": 2048, 00:16:26.995 "data_size": 63488 00:16:26.995 }, 00:16:26.995 { 00:16:26.995 "name": "BaseBdev4", 00:16:26.995 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:26.995 "is_configured": true, 00:16:26.995 "data_offset": 2048, 00:16:26.995 "data_size": 63488 00:16:26.995 } 00:16:26.995 ] 00:16:26.995 }' 00:16:26.995 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.995 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.995 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.995 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.995 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.377 "name": "raid_bdev1", 00:16:28.377 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:28.377 "strip_size_kb": 64, 00:16:28.377 "state": "online", 00:16:28.377 "raid_level": "raid5f", 00:16:28.377 "superblock": true, 00:16:28.377 "num_base_bdevs": 4, 00:16:28.377 "num_base_bdevs_discovered": 4, 00:16:28.377 "num_base_bdevs_operational": 4, 00:16:28.377 "process": { 00:16:28.377 "type": "rebuild", 00:16:28.377 "target": "spare", 00:16:28.377 "progress": { 00:16:28.377 "blocks": 86400, 00:16:28.377 "percent": 45 00:16:28.377 } 00:16:28.377 }, 00:16:28.377 "base_bdevs_list": [ 00:16:28.377 { 00:16:28.377 "name": "spare", 00:16:28.377 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:28.377 "is_configured": true, 
00:16:28.377 "data_offset": 2048, 00:16:28.377 "data_size": 63488 00:16:28.377 }, 00:16:28.377 { 00:16:28.377 "name": "BaseBdev2", 00:16:28.377 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:28.377 "is_configured": true, 00:16:28.377 "data_offset": 2048, 00:16:28.377 "data_size": 63488 00:16:28.377 }, 00:16:28.377 { 00:16:28.377 "name": "BaseBdev3", 00:16:28.377 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:28.377 "is_configured": true, 00:16:28.377 "data_offset": 2048, 00:16:28.377 "data_size": 63488 00:16:28.377 }, 00:16:28.377 { 00:16:28.377 "name": "BaseBdev4", 00:16:28.377 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:28.377 "is_configured": true, 00:16:28.377 "data_offset": 2048, 00:16:28.377 "data_size": 63488 00:16:28.377 } 00:16:28.377 ] 00:16:28.377 }' 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.377 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.317 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.317 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.317 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.317 "name": "raid_bdev1", 00:16:29.317 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:29.317 "strip_size_kb": 64, 00:16:29.317 "state": "online", 00:16:29.317 "raid_level": "raid5f", 00:16:29.317 "superblock": true, 00:16:29.317 "num_base_bdevs": 4, 00:16:29.317 "num_base_bdevs_discovered": 4, 00:16:29.317 "num_base_bdevs_operational": 4, 00:16:29.317 "process": { 00:16:29.317 "type": "rebuild", 00:16:29.317 "target": "spare", 00:16:29.317 "progress": { 00:16:29.317 "blocks": 109440, 00:16:29.317 "percent": 57 00:16:29.317 } 00:16:29.317 }, 00:16:29.317 "base_bdevs_list": [ 00:16:29.317 { 00:16:29.317 "name": "spare", 00:16:29.317 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:29.317 "is_configured": true, 00:16:29.317 "data_offset": 2048, 00:16:29.317 "data_size": 63488 00:16:29.317 }, 00:16:29.317 { 00:16:29.317 "name": "BaseBdev2", 00:16:29.317 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:29.317 "is_configured": true, 00:16:29.317 "data_offset": 2048, 00:16:29.317 "data_size": 63488 00:16:29.317 }, 00:16:29.317 { 00:16:29.317 "name": "BaseBdev3", 00:16:29.317 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:29.317 "is_configured": true, 00:16:29.317 "data_offset": 2048, 00:16:29.317 "data_size": 63488 00:16:29.317 }, 00:16:29.317 
{ 00:16:29.317 "name": "BaseBdev4", 00:16:29.317 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:29.317 "is_configured": true, 00:16:29.317 "data_offset": 2048, 00:16:29.317 "data_size": 63488 00:16:29.317 } 00:16:29.317 ] 00:16:29.317 }' 00:16:29.317 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.317 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.317 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.317 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.317 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.284 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.284 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.284 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.284 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.284 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.284 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.284 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.285 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.285 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.285 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.285 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.285 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.285 "name": "raid_bdev1", 00:16:30.285 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:30.285 "strip_size_kb": 64, 00:16:30.285 "state": "online", 00:16:30.285 "raid_level": "raid5f", 00:16:30.285 "superblock": true, 00:16:30.285 "num_base_bdevs": 4, 00:16:30.285 "num_base_bdevs_discovered": 4, 00:16:30.285 "num_base_bdevs_operational": 4, 00:16:30.285 "process": { 00:16:30.285 "type": "rebuild", 00:16:30.285 "target": "spare", 00:16:30.285 "progress": { 00:16:30.285 "blocks": 130560, 00:16:30.285 "percent": 68 00:16:30.285 } 00:16:30.285 }, 00:16:30.285 "base_bdevs_list": [ 00:16:30.285 { 00:16:30.285 "name": "spare", 00:16:30.285 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:30.285 "is_configured": true, 00:16:30.285 "data_offset": 2048, 00:16:30.285 "data_size": 63488 00:16:30.285 }, 00:16:30.285 { 00:16:30.285 "name": "BaseBdev2", 00:16:30.285 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:30.285 "is_configured": true, 00:16:30.285 "data_offset": 2048, 00:16:30.285 "data_size": 63488 00:16:30.285 }, 00:16:30.285 { 00:16:30.285 "name": "BaseBdev3", 00:16:30.285 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:30.285 "is_configured": true, 00:16:30.285 "data_offset": 2048, 00:16:30.285 "data_size": 63488 00:16:30.285 }, 00:16:30.285 { 00:16:30.285 "name": "BaseBdev4", 00:16:30.285 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:30.285 "is_configured": true, 00:16:30.285 "data_offset": 2048, 00:16:30.285 "data_size": 63488 00:16:30.285 } 00:16:30.285 ] 00:16:30.285 }' 00:16:30.285 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.545 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.545 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:30.545 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.545 10:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.484 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.484 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.484 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.485 "name": "raid_bdev1", 00:16:31.485 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:31.485 "strip_size_kb": 64, 00:16:31.485 "state": "online", 00:16:31.485 "raid_level": "raid5f", 00:16:31.485 "superblock": true, 00:16:31.485 "num_base_bdevs": 4, 00:16:31.485 "num_base_bdevs_discovered": 4, 00:16:31.485 "num_base_bdevs_operational": 4, 00:16:31.485 "process": { 00:16:31.485 "type": 
"rebuild", 00:16:31.485 "target": "spare", 00:16:31.485 "progress": { 00:16:31.485 "blocks": 153600, 00:16:31.485 "percent": 80 00:16:31.485 } 00:16:31.485 }, 00:16:31.485 "base_bdevs_list": [ 00:16:31.485 { 00:16:31.485 "name": "spare", 00:16:31.485 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:31.485 "is_configured": true, 00:16:31.485 "data_offset": 2048, 00:16:31.485 "data_size": 63488 00:16:31.485 }, 00:16:31.485 { 00:16:31.485 "name": "BaseBdev2", 00:16:31.485 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:31.485 "is_configured": true, 00:16:31.485 "data_offset": 2048, 00:16:31.485 "data_size": 63488 00:16:31.485 }, 00:16:31.485 { 00:16:31.485 "name": "BaseBdev3", 00:16:31.485 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:31.485 "is_configured": true, 00:16:31.485 "data_offset": 2048, 00:16:31.485 "data_size": 63488 00:16:31.485 }, 00:16:31.485 { 00:16:31.485 "name": "BaseBdev4", 00:16:31.485 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:31.485 "is_configured": true, 00:16:31.485 "data_offset": 2048, 00:16:31.485 "data_size": 63488 00:16:31.485 } 00:16:31.485 ] 00:16:31.485 }' 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.485 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.745 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.745 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.745 10:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.684 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.685 "name": "raid_bdev1", 00:16:32.685 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:32.685 "strip_size_kb": 64, 00:16:32.685 "state": "online", 00:16:32.685 "raid_level": "raid5f", 00:16:32.685 "superblock": true, 00:16:32.685 "num_base_bdevs": 4, 00:16:32.685 "num_base_bdevs_discovered": 4, 00:16:32.685 "num_base_bdevs_operational": 4, 00:16:32.685 "process": { 00:16:32.685 "type": "rebuild", 00:16:32.685 "target": "spare", 00:16:32.685 "progress": { 00:16:32.685 "blocks": 174720, 00:16:32.685 "percent": 91 00:16:32.685 } 00:16:32.685 }, 00:16:32.685 "base_bdevs_list": [ 00:16:32.685 { 00:16:32.685 "name": "spare", 00:16:32.685 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:32.685 "is_configured": true, 00:16:32.685 "data_offset": 2048, 00:16:32.685 "data_size": 63488 00:16:32.685 }, 00:16:32.685 { 00:16:32.685 "name": "BaseBdev2", 00:16:32.685 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:32.685 
"is_configured": true, 00:16:32.685 "data_offset": 2048, 00:16:32.685 "data_size": 63488 00:16:32.685 }, 00:16:32.685 { 00:16:32.685 "name": "BaseBdev3", 00:16:32.685 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:32.685 "is_configured": true, 00:16:32.685 "data_offset": 2048, 00:16:32.685 "data_size": 63488 00:16:32.685 }, 00:16:32.685 { 00:16:32.685 "name": "BaseBdev4", 00:16:32.685 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:32.685 "is_configured": true, 00:16:32.685 "data_offset": 2048, 00:16:32.685 "data_size": 63488 00:16:32.685 } 00:16:32.685 ] 00:16:32.685 }' 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.685 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.945 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.945 10:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.515 [2024-11-18 10:44:59.283222] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:33.515 [2024-11-18 10:44:59.283311] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:33.515 [2024-11-18 10:44:59.283427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.774 "name": "raid_bdev1", 00:16:33.774 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:33.774 "strip_size_kb": 64, 00:16:33.774 "state": "online", 00:16:33.774 "raid_level": "raid5f", 00:16:33.774 "superblock": true, 00:16:33.774 "num_base_bdevs": 4, 00:16:33.774 "num_base_bdevs_discovered": 4, 00:16:33.774 "num_base_bdevs_operational": 4, 00:16:33.774 "base_bdevs_list": [ 00:16:33.774 { 00:16:33.774 "name": "spare", 00:16:33.774 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "name": "BaseBdev2", 00:16:33.774 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "name": "BaseBdev3", 00:16:33.774 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 }, 00:16:33.774 { 00:16:33.774 "name": 
"BaseBdev4", 00:16:33.774 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:33.774 "is_configured": true, 00:16:33.774 "data_offset": 2048, 00:16:33.774 "data_size": 63488 00:16:33.774 } 00:16:33.774 ] 00:16:33.774 }' 00:16:33.774 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:34.034 "name": "raid_bdev1", 00:16:34.034 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:34.034 "strip_size_kb": 64, 00:16:34.034 "state": "online", 00:16:34.034 "raid_level": "raid5f", 00:16:34.034 "superblock": true, 00:16:34.034 "num_base_bdevs": 4, 00:16:34.034 "num_base_bdevs_discovered": 4, 00:16:34.034 "num_base_bdevs_operational": 4, 00:16:34.034 "base_bdevs_list": [ 00:16:34.034 { 00:16:34.034 "name": "spare", 00:16:34.034 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:34.034 "is_configured": true, 00:16:34.034 "data_offset": 2048, 00:16:34.034 "data_size": 63488 00:16:34.034 }, 00:16:34.034 { 00:16:34.034 "name": "BaseBdev2", 00:16:34.034 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:34.034 "is_configured": true, 00:16:34.034 "data_offset": 2048, 00:16:34.034 "data_size": 63488 00:16:34.034 }, 00:16:34.034 { 00:16:34.034 "name": "BaseBdev3", 00:16:34.034 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:34.034 "is_configured": true, 00:16:34.034 "data_offset": 2048, 00:16:34.034 "data_size": 63488 00:16:34.034 }, 00:16:34.034 { 00:16:34.034 "name": "BaseBdev4", 00:16:34.034 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:34.034 "is_configured": true, 00:16:34.034 "data_offset": 2048, 00:16:34.034 "data_size": 63488 00:16:34.034 } 00:16:34.034 ] 00:16:34.034 }' 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.034 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.034 "name": "raid_bdev1", 00:16:34.034 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:34.034 "strip_size_kb": 64, 00:16:34.034 "state": "online", 00:16:34.034 "raid_level": "raid5f", 00:16:34.034 "superblock": true, 00:16:34.034 "num_base_bdevs": 4, 00:16:34.034 "num_base_bdevs_discovered": 4, 00:16:34.034 "num_base_bdevs_operational": 4, 00:16:34.034 "base_bdevs_list": [ 00:16:34.034 { 
00:16:34.034 "name": "spare", 00:16:34.034 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:34.034 "is_configured": true, 00:16:34.034 "data_offset": 2048, 00:16:34.034 "data_size": 63488 00:16:34.034 }, 00:16:34.035 { 00:16:34.035 "name": "BaseBdev2", 00:16:34.035 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:34.035 "is_configured": true, 00:16:34.035 "data_offset": 2048, 00:16:34.035 "data_size": 63488 00:16:34.035 }, 00:16:34.035 { 00:16:34.035 "name": "BaseBdev3", 00:16:34.035 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:34.035 "is_configured": true, 00:16:34.035 "data_offset": 2048, 00:16:34.035 "data_size": 63488 00:16:34.035 }, 00:16:34.035 { 00:16:34.035 "name": "BaseBdev4", 00:16:34.035 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:34.035 "is_configured": true, 00:16:34.035 "data_offset": 2048, 00:16:34.035 "data_size": 63488 00:16:34.035 } 00:16:34.035 ] 00:16:34.035 }' 00:16:34.035 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.035 10:44:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.605 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.605 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.605 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.605 [2024-11-18 10:45:00.300188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.605 [2024-11-18 10:45:00.300258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.605 [2024-11-18 10:45:00.300344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.605 [2024-11-18 10:45:00.300441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.605 [2024-11-18 
10:45:00.300502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:34.605 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.605 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.605 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.605 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:34.605 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:34.606 10:45:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.606 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:34.866 /dev/nbd0 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.866 1+0 records in 00:16:34.866 1+0 records out 00:16:34.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563184 s, 7.3 MB/s 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.866 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:35.126 /dev/nbd1 00:16:35.126 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:35.126 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:35.126 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:35.126 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:35.126 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:35.126 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.127 1+0 records in 00:16:35.127 
1+0 records out 00:16:35.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408533 s, 10.0 MB/s 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:35.127 10:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:35.387 
10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.387 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.648 [2024-11-18 10:45:01.471664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.648 [2024-11-18 10:45:01.471720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.648 [2024-11-18 10:45:01.471745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:35.648 [2024-11-18 10:45:01.471754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.648 [2024-11-18 10:45:01.473786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.648 [2024-11-18 10:45:01.473823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.648 [2024-11-18 10:45:01.473899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:35.648 [2024-11-18 10:45:01.473950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.648 [2024-11-18 10:45:01.474074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.648 [2024-11-18 10:45:01.474153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.648 [2024-11-18 10:45:01.474244] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:35.648 spare 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.648 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.909 [2024-11-18 10:45:01.574131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:35.909 [2024-11-18 10:45:01.574159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:35.909 [2024-11-18 10:45:01.574460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:35.909 [2024-11-18 10:45:01.581012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:35.909 [2024-11-18 10:45:01.581031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:35.909 [2024-11-18 10:45:01.581228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.909 "name": "raid_bdev1", 00:16:35.909 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:35.909 "strip_size_kb": 64, 00:16:35.909 "state": "online", 00:16:35.909 "raid_level": "raid5f", 00:16:35.909 "superblock": true, 00:16:35.909 "num_base_bdevs": 4, 00:16:35.909 "num_base_bdevs_discovered": 4, 00:16:35.909 "num_base_bdevs_operational": 4, 00:16:35.909 "base_bdevs_list": [ 00:16:35.909 { 00:16:35.909 "name": "spare", 00:16:35.909 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:35.909 "is_configured": true, 00:16:35.909 "data_offset": 2048, 00:16:35.909 "data_size": 63488 00:16:35.909 }, 00:16:35.909 { 00:16:35.909 "name": "BaseBdev2", 00:16:35.909 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:35.909 "is_configured": true, 00:16:35.909 "data_offset": 
2048, 00:16:35.909 "data_size": 63488 00:16:35.909 }, 00:16:35.909 { 00:16:35.909 "name": "BaseBdev3", 00:16:35.909 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:35.909 "is_configured": true, 00:16:35.909 "data_offset": 2048, 00:16:35.909 "data_size": 63488 00:16:35.909 }, 00:16:35.909 { 00:16:35.909 "name": "BaseBdev4", 00:16:35.909 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:35.909 "is_configured": true, 00:16:35.909 "data_offset": 2048, 00:16:35.909 "data_size": 63488 00:16:35.909 } 00:16:35.909 ] 00:16:35.909 }' 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.909 10:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.479 "name": 
"raid_bdev1", 00:16:36.479 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:36.479 "strip_size_kb": 64, 00:16:36.479 "state": "online", 00:16:36.479 "raid_level": "raid5f", 00:16:36.479 "superblock": true, 00:16:36.479 "num_base_bdevs": 4, 00:16:36.479 "num_base_bdevs_discovered": 4, 00:16:36.479 "num_base_bdevs_operational": 4, 00:16:36.479 "base_bdevs_list": [ 00:16:36.479 { 00:16:36.479 "name": "spare", 00:16:36.479 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:36.479 "is_configured": true, 00:16:36.479 "data_offset": 2048, 00:16:36.479 "data_size": 63488 00:16:36.479 }, 00:16:36.479 { 00:16:36.479 "name": "BaseBdev2", 00:16:36.479 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:36.479 "is_configured": true, 00:16:36.479 "data_offset": 2048, 00:16:36.479 "data_size": 63488 00:16:36.479 }, 00:16:36.479 { 00:16:36.479 "name": "BaseBdev3", 00:16:36.479 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:36.479 "is_configured": true, 00:16:36.479 "data_offset": 2048, 00:16:36.479 "data_size": 63488 00:16:36.479 }, 00:16:36.479 { 00:16:36.479 "name": "BaseBdev4", 00:16:36.479 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:36.479 "is_configured": true, 00:16:36.479 "data_offset": 2048, 00:16:36.479 "data_size": 63488 00:16:36.479 } 00:16:36.479 ] 00:16:36.479 }' 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.479 [2024-11-18 10:45:02.319745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.479 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.480 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.739 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.739 "name": "raid_bdev1", 00:16:36.739 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:36.739 "strip_size_kb": 64, 00:16:36.739 "state": "online", 00:16:36.739 "raid_level": "raid5f", 00:16:36.739 "superblock": true, 00:16:36.739 "num_base_bdevs": 4, 00:16:36.739 "num_base_bdevs_discovered": 3, 00:16:36.739 "num_base_bdevs_operational": 3, 00:16:36.739 "base_bdevs_list": [ 00:16:36.739 { 00:16:36.739 "name": null, 00:16:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.739 "is_configured": false, 00:16:36.739 "data_offset": 0, 00:16:36.739 "data_size": 63488 00:16:36.739 }, 00:16:36.739 { 00:16:36.739 "name": "BaseBdev2", 00:16:36.739 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:36.739 "is_configured": true, 00:16:36.739 "data_offset": 2048, 00:16:36.739 "data_size": 63488 00:16:36.739 }, 00:16:36.739 { 00:16:36.739 "name": "BaseBdev3", 00:16:36.739 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:36.739 "is_configured": true, 00:16:36.739 "data_offset": 2048, 00:16:36.739 "data_size": 63488 00:16:36.739 }, 00:16:36.739 { 00:16:36.739 "name": "BaseBdev4", 00:16:36.739 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:36.739 "is_configured": true, 00:16:36.739 "data_offset": 
2048, 00:16:36.739 "data_size": 63488 00:16:36.739 } 00:16:36.739 ] 00:16:36.739 }' 00:16:36.739 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.739 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.999 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.999 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.999 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.999 [2024-11-18 10:45:02.763054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.999 [2024-11-18 10:45:02.763251] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:36.999 [2024-11-18 10:45:02.763322] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:36.999 [2024-11-18 10:45:02.763377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.999 [2024-11-18 10:45:02.777248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:36.999 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.999 10:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:36.999 [2024-11-18 10:45:02.785467] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.938 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.198 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.198 "name": "raid_bdev1", 00:16:38.198 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:38.198 "strip_size_kb": 64, 00:16:38.198 "state": "online", 00:16:38.198 
"raid_level": "raid5f", 00:16:38.198 "superblock": true, 00:16:38.198 "num_base_bdevs": 4, 00:16:38.198 "num_base_bdevs_discovered": 4, 00:16:38.198 "num_base_bdevs_operational": 4, 00:16:38.198 "process": { 00:16:38.198 "type": "rebuild", 00:16:38.198 "target": "spare", 00:16:38.198 "progress": { 00:16:38.198 "blocks": 19200, 00:16:38.198 "percent": 10 00:16:38.198 } 00:16:38.198 }, 00:16:38.198 "base_bdevs_list": [ 00:16:38.198 { 00:16:38.198 "name": "spare", 00:16:38.198 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:38.198 "is_configured": true, 00:16:38.198 "data_offset": 2048, 00:16:38.198 "data_size": 63488 00:16:38.198 }, 00:16:38.198 { 00:16:38.198 "name": "BaseBdev2", 00:16:38.198 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:38.198 "is_configured": true, 00:16:38.198 "data_offset": 2048, 00:16:38.198 "data_size": 63488 00:16:38.198 }, 00:16:38.198 { 00:16:38.198 "name": "BaseBdev3", 00:16:38.198 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:38.198 "is_configured": true, 00:16:38.198 "data_offset": 2048, 00:16:38.198 "data_size": 63488 00:16:38.198 }, 00:16:38.198 { 00:16:38.198 "name": "BaseBdev4", 00:16:38.198 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:38.198 "is_configured": true, 00:16:38.198 "data_offset": 2048, 00:16:38.198 "data_size": 63488 00:16:38.198 } 00:16:38.198 ] 00:16:38.198 }' 00:16:38.198 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.198 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.198 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.198 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.198 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.198 10:45:03 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.198 10:45:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.198 [2024-11-18 10:45:03.944200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.198 [2024-11-18 10:45:03.991237] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.198 [2024-11-18 10:45:03.991323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.198 [2024-11-18 10:45:03.991342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.198 [2024-11-18 10:45:03.991353] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.198 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.198 "name": "raid_bdev1", 00:16:38.198 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:38.198 "strip_size_kb": 64, 00:16:38.198 "state": "online", 00:16:38.198 "raid_level": "raid5f", 00:16:38.198 "superblock": true, 00:16:38.198 "num_base_bdevs": 4, 00:16:38.198 "num_base_bdevs_discovered": 3, 00:16:38.198 "num_base_bdevs_operational": 3, 00:16:38.198 "base_bdevs_list": [ 00:16:38.198 { 00:16:38.198 "name": null, 00:16:38.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.198 "is_configured": false, 00:16:38.198 "data_offset": 0, 00:16:38.198 "data_size": 63488 00:16:38.198 }, 00:16:38.198 { 00:16:38.198 "name": "BaseBdev2", 00:16:38.198 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:38.198 "is_configured": true, 00:16:38.198 "data_offset": 2048, 00:16:38.199 "data_size": 63488 00:16:38.199 }, 00:16:38.199 { 00:16:38.199 "name": "BaseBdev3", 00:16:38.199 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:38.199 "is_configured": true, 00:16:38.199 "data_offset": 2048, 00:16:38.199 "data_size": 63488 00:16:38.199 }, 00:16:38.199 { 00:16:38.199 "name": "BaseBdev4", 00:16:38.199 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:38.199 "is_configured": true, 00:16:38.199 "data_offset": 2048, 00:16:38.199 "data_size": 63488 00:16:38.199 } 00:16:38.199 ] 00:16:38.199 
}' 00:16:38.199 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.199 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.767 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.767 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.767 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.767 [2024-11-18 10:45:04.490402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.767 [2024-11-18 10:45:04.490525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.767 [2024-11-18 10:45:04.490575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:38.767 [2024-11-18 10:45:04.490611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.767 [2024-11-18 10:45:04.491159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.767 [2024-11-18 10:45:04.491236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.767 [2024-11-18 10:45:04.491394] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.767 [2024-11-18 10:45:04.491447] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:38.767 [2024-11-18 10:45:04.491495] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:38.767 [2024-11-18 10:45:04.491572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.767 [2024-11-18 10:45:04.506187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:38.767 spare 00:16:38.767 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.767 10:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:38.767 [2024-11-18 10:45:04.515193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.706 "name": "raid_bdev1", 00:16:39.706 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:39.706 "strip_size_kb": 64, 00:16:39.706 "state": 
"online", 00:16:39.706 "raid_level": "raid5f", 00:16:39.706 "superblock": true, 00:16:39.706 "num_base_bdevs": 4, 00:16:39.706 "num_base_bdevs_discovered": 4, 00:16:39.706 "num_base_bdevs_operational": 4, 00:16:39.706 "process": { 00:16:39.706 "type": "rebuild", 00:16:39.706 "target": "spare", 00:16:39.706 "progress": { 00:16:39.706 "blocks": 19200, 00:16:39.706 "percent": 10 00:16:39.706 } 00:16:39.706 }, 00:16:39.706 "base_bdevs_list": [ 00:16:39.706 { 00:16:39.706 "name": "spare", 00:16:39.706 "uuid": "6ddc0a05-bcb8-5bd3-bcb1-7a675a8b081a", 00:16:39.706 "is_configured": true, 00:16:39.706 "data_offset": 2048, 00:16:39.706 "data_size": 63488 00:16:39.706 }, 00:16:39.706 { 00:16:39.706 "name": "BaseBdev2", 00:16:39.706 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:39.706 "is_configured": true, 00:16:39.706 "data_offset": 2048, 00:16:39.706 "data_size": 63488 00:16:39.706 }, 00:16:39.706 { 00:16:39.706 "name": "BaseBdev3", 00:16:39.706 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:39.706 "is_configured": true, 00:16:39.706 "data_offset": 2048, 00:16:39.706 "data_size": 63488 00:16:39.706 }, 00:16:39.706 { 00:16:39.706 "name": "BaseBdev4", 00:16:39.706 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:39.706 "is_configured": true, 00:16:39.706 "data_offset": 2048, 00:16:39.706 "data_size": 63488 00:16:39.706 } 00:16:39.706 ] 00:16:39.706 }' 00:16:39.706 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:39.966 10:45:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.966 [2024-11-18 10:45:05.674194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.966 [2024-11-18 10:45:05.722209] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:39.966 [2024-11-18 10:45:05.722325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.966 [2024-11-18 10:45:05.722367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.966 [2024-11-18 10:45:05.722406] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.966 10:45:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.966 "name": "raid_bdev1", 00:16:39.966 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:39.966 "strip_size_kb": 64, 00:16:39.966 "state": "online", 00:16:39.966 "raid_level": "raid5f", 00:16:39.966 "superblock": true, 00:16:39.966 "num_base_bdevs": 4, 00:16:39.966 "num_base_bdevs_discovered": 3, 00:16:39.966 "num_base_bdevs_operational": 3, 00:16:39.966 "base_bdevs_list": [ 00:16:39.966 { 00:16:39.966 "name": null, 00:16:39.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.966 "is_configured": false, 00:16:39.966 "data_offset": 0, 00:16:39.966 "data_size": 63488 00:16:39.966 }, 00:16:39.966 { 00:16:39.966 "name": "BaseBdev2", 00:16:39.966 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:39.966 "is_configured": true, 00:16:39.966 "data_offset": 2048, 00:16:39.966 "data_size": 63488 00:16:39.966 }, 00:16:39.966 { 00:16:39.966 "name": "BaseBdev3", 00:16:39.966 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:39.966 "is_configured": true, 00:16:39.966 "data_offset": 2048, 00:16:39.966 "data_size": 63488 00:16:39.966 }, 00:16:39.966 { 00:16:39.966 "name": "BaseBdev4", 00:16:39.966 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:39.966 "is_configured": true, 00:16:39.966 "data_offset": 2048, 00:16:39.966 
"data_size": 63488 00:16:39.966 } 00:16:39.966 ] 00:16:39.966 }' 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.966 10:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.536 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.536 "name": "raid_bdev1", 00:16:40.536 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:40.536 "strip_size_kb": 64, 00:16:40.536 "state": "online", 00:16:40.536 "raid_level": "raid5f", 00:16:40.536 "superblock": true, 00:16:40.536 "num_base_bdevs": 4, 00:16:40.536 "num_base_bdevs_discovered": 3, 00:16:40.536 "num_base_bdevs_operational": 3, 00:16:40.536 "base_bdevs_list": [ 00:16:40.537 { 00:16:40.537 "name": null, 00:16:40.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.537 
"is_configured": false, 00:16:40.537 "data_offset": 0, 00:16:40.537 "data_size": 63488 00:16:40.537 }, 00:16:40.537 { 00:16:40.537 "name": "BaseBdev2", 00:16:40.537 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:40.537 "is_configured": true, 00:16:40.537 "data_offset": 2048, 00:16:40.537 "data_size": 63488 00:16:40.537 }, 00:16:40.537 { 00:16:40.537 "name": "BaseBdev3", 00:16:40.537 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:40.537 "is_configured": true, 00:16:40.537 "data_offset": 2048, 00:16:40.537 "data_size": 63488 00:16:40.537 }, 00:16:40.537 { 00:16:40.537 "name": "BaseBdev4", 00:16:40.537 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:40.537 "is_configured": true, 00:16:40.537 "data_offset": 2048, 00:16:40.537 "data_size": 63488 00:16:40.537 } 00:16:40.537 ] 00:16:40.537 }' 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.537 10:45:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.537 [2024-11-18 10:45:06.347380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:40.537 [2024-11-18 10:45:06.347479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.537 [2024-11-18 10:45:06.347510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:40.537 [2024-11-18 10:45:06.347520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.537 [2024-11-18 10:45:06.348095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.537 [2024-11-18 10:45:06.348123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.537 [2024-11-18 10:45:06.348218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:40.537 [2024-11-18 10:45:06.348236] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:40.537 [2024-11-18 10:45:06.348256] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:40.537 [2024-11-18 10:45:06.348277] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:40.537 BaseBdev1 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.537 10:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.917 "name": "raid_bdev1", 00:16:41.917 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:41.917 "strip_size_kb": 64, 00:16:41.917 "state": "online", 00:16:41.917 "raid_level": "raid5f", 00:16:41.917 "superblock": true, 00:16:41.917 "num_base_bdevs": 4, 00:16:41.917 "num_base_bdevs_discovered": 3, 00:16:41.917 "num_base_bdevs_operational": 3, 00:16:41.917 "base_bdevs_list": [ 00:16:41.917 { 00:16:41.917 "name": null, 00:16:41.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.917 "is_configured": false, 00:16:41.917 
"data_offset": 0, 00:16:41.917 "data_size": 63488 00:16:41.917 }, 00:16:41.917 { 00:16:41.917 "name": "BaseBdev2", 00:16:41.917 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:41.917 "is_configured": true, 00:16:41.917 "data_offset": 2048, 00:16:41.917 "data_size": 63488 00:16:41.917 }, 00:16:41.917 { 00:16:41.917 "name": "BaseBdev3", 00:16:41.917 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:41.917 "is_configured": true, 00:16:41.917 "data_offset": 2048, 00:16:41.917 "data_size": 63488 00:16:41.917 }, 00:16:41.917 { 00:16:41.917 "name": "BaseBdev4", 00:16:41.917 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:41.917 "is_configured": true, 00:16:41.917 "data_offset": 2048, 00:16:41.917 "data_size": 63488 00:16:41.917 } 00:16:41.917 ] 00:16:41.917 }' 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:41.917 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.177 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.177 "name": "raid_bdev1", 00:16:42.177 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:42.177 "strip_size_kb": 64, 00:16:42.177 "state": "online", 00:16:42.177 "raid_level": "raid5f", 00:16:42.177 "superblock": true, 00:16:42.177 "num_base_bdevs": 4, 00:16:42.177 "num_base_bdevs_discovered": 3, 00:16:42.177 "num_base_bdevs_operational": 3, 00:16:42.177 "base_bdevs_list": [ 00:16:42.177 { 00:16:42.177 "name": null, 00:16:42.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.177 "is_configured": false, 00:16:42.177 "data_offset": 0, 00:16:42.177 "data_size": 63488 00:16:42.177 }, 00:16:42.177 { 00:16:42.177 "name": "BaseBdev2", 00:16:42.177 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:42.177 "is_configured": true, 00:16:42.177 "data_offset": 2048, 00:16:42.177 "data_size": 63488 00:16:42.177 }, 00:16:42.177 { 00:16:42.177 "name": "BaseBdev3", 00:16:42.177 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:42.177 "is_configured": true, 00:16:42.177 "data_offset": 2048, 00:16:42.177 "data_size": 63488 00:16:42.177 }, 00:16:42.177 { 00:16:42.177 "name": "BaseBdev4", 00:16:42.177 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:42.177 "is_configured": true, 00:16:42.177 "data_offset": 2048, 00:16:42.177 "data_size": 63488 00:16:42.177 } 00:16:42.178 ] 00:16:42.178 }' 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.178 
10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.178 [2024-11-18 10:45:07.940865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.178 [2024-11-18 10:45:07.941105] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:42.178 [2024-11-18 10:45:07.941166] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:42.178 request: 00:16:42.178 { 00:16:42.178 "base_bdev": "BaseBdev1", 00:16:42.178 "raid_bdev": "raid_bdev1", 00:16:42.178 "method": "bdev_raid_add_base_bdev", 00:16:42.178 "req_id": 1 00:16:42.178 } 00:16:42.178 Got JSON-RPC error response 00:16:42.178 response: 00:16:42.178 { 00:16:42.178 "code": -22, 00:16:42.178 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:42.178 } 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:42.178 10:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.118 10:45:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.388 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.388 "name": "raid_bdev1", 00:16:43.388 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:43.388 "strip_size_kb": 64, 00:16:43.388 "state": "online", 00:16:43.388 "raid_level": "raid5f", 00:16:43.388 "superblock": true, 00:16:43.388 "num_base_bdevs": 4, 00:16:43.388 "num_base_bdevs_discovered": 3, 00:16:43.388 "num_base_bdevs_operational": 3, 00:16:43.388 "base_bdevs_list": [ 00:16:43.388 { 00:16:43.388 "name": null, 00:16:43.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.388 "is_configured": false, 00:16:43.388 "data_offset": 0, 00:16:43.388 "data_size": 63488 00:16:43.388 }, 00:16:43.388 { 00:16:43.388 "name": "BaseBdev2", 00:16:43.388 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:43.388 "is_configured": true, 00:16:43.388 "data_offset": 2048, 00:16:43.388 "data_size": 63488 00:16:43.388 }, 00:16:43.388 { 00:16:43.388 "name": "BaseBdev3", 00:16:43.388 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:43.388 "is_configured": true, 00:16:43.388 "data_offset": 2048, 00:16:43.388 "data_size": 63488 00:16:43.388 }, 00:16:43.388 { 00:16:43.388 "name": "BaseBdev4", 00:16:43.388 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:43.388 "is_configured": true, 00:16:43.388 "data_offset": 2048, 00:16:43.388 "data_size": 63488 00:16:43.388 } 00:16:43.388 ] 00:16:43.388 }' 00:16:43.388 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.388 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.647 "name": "raid_bdev1", 00:16:43.647 "uuid": "1bb9920c-449e-443f-8ab6-605bdde9232a", 00:16:43.647 "strip_size_kb": 64, 00:16:43.647 "state": "online", 00:16:43.647 "raid_level": "raid5f", 00:16:43.647 "superblock": true, 00:16:43.647 "num_base_bdevs": 4, 00:16:43.647 "num_base_bdevs_discovered": 3, 00:16:43.647 "num_base_bdevs_operational": 3, 00:16:43.647 "base_bdevs_list": [ 00:16:43.647 { 00:16:43.647 "name": null, 00:16:43.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.647 "is_configured": false, 00:16:43.647 "data_offset": 0, 00:16:43.647 "data_size": 63488 00:16:43.647 }, 00:16:43.647 { 00:16:43.647 "name": "BaseBdev2", 00:16:43.647 "uuid": "e4ee1f69-405a-5c31-9287-ede2b5aedbcd", 00:16:43.647 "is_configured": true, 
00:16:43.647 "data_offset": 2048, 00:16:43.647 "data_size": 63488 00:16:43.647 }, 00:16:43.647 { 00:16:43.647 "name": "BaseBdev3", 00:16:43.647 "uuid": "9a712135-a0e0-53c6-ba1f-dc0b0387dd7a", 00:16:43.647 "is_configured": true, 00:16:43.647 "data_offset": 2048, 00:16:43.647 "data_size": 63488 00:16:43.647 }, 00:16:43.647 { 00:16:43.647 "name": "BaseBdev4", 00:16:43.647 "uuid": "41779460-389d-586a-beb6-ac6909f538ca", 00:16:43.647 "is_configured": true, 00:16:43.647 "data_offset": 2048, 00:16:43.647 "data_size": 63488 00:16:43.647 } 00:16:43.647 ] 00:16:43.647 }' 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84913 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84913 ']' 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84913 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.647 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84913 00:16:43.907 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.907 killing process with pid 84913 00:16:43.907 Received shutdown signal, test time was about 60.000000 seconds 00:16:43.907 00:16:43.907 Latency(us) 00:16:43.907 [2024-11-18T10:45:09.792Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.907 [2024-11-18T10:45:09.792Z] =================================================================================================================== 00:16:43.907 [2024-11-18T10:45:09.792Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:43.907 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.907 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84913' 00:16:43.907 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84913 00:16:43.907 [2024-11-18 10:45:09.555024] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.907 [2024-11-18 10:45:09.555163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.907 10:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84913 00:16:43.907 [2024-11-18 10:45:09.555262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.907 [2024-11-18 10:45:09.555276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:44.476 [2024-11-18 10:45:10.061940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.417 10:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:45.417 00:16:45.417 real 0m27.167s 00:16:45.417 user 0m34.041s 00:16:45.418 sys 0m3.204s 00:16:45.418 10:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.418 10:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.418 ************************************ 00:16:45.418 END TEST raid5f_rebuild_test_sb 00:16:45.418 ************************************ 00:16:45.418 10:45:11 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:45.418 10:45:11 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:45.418 10:45:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:45.418 10:45:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.418 10:45:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.418 ************************************ 00:16:45.418 START TEST raid_state_function_test_sb_4k 00:16:45.418 ************************************ 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.418 10:45:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85724 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:45.418 Process raid pid: 85724 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85724' 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85724 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85724 ']' 00:16:45.418 10:45:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.418 10:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.677 [2024-11-18 10:45:11.381715] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:45.677 [2024-11-18 10:45:11.381836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.936 [2024-11-18 10:45:11.561215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.936 [2024-11-18 10:45:11.696948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.195 [2024-11-18 10:45:11.929862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.195 [2024-11-18 10:45:11.929899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.454 [2024-11-18 10:45:12.204813] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.454 [2024-11-18 10:45:12.204883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.454 [2024-11-18 10:45:12.204895] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.454 [2024-11-18 10:45:12.204905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.454 
10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.454 "name": "Existed_Raid", 00:16:46.454 "uuid": "847e2578-6409-4f0c-8ef5-0d1d0d59a0fe", 00:16:46.454 "strip_size_kb": 0, 00:16:46.454 "state": "configuring", 00:16:46.454 "raid_level": "raid1", 00:16:46.454 "superblock": true, 00:16:46.454 "num_base_bdevs": 2, 00:16:46.454 "num_base_bdevs_discovered": 0, 00:16:46.454 "num_base_bdevs_operational": 2, 00:16:46.454 "base_bdevs_list": [ 00:16:46.454 { 00:16:46.454 "name": "BaseBdev1", 00:16:46.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.454 "is_configured": false, 00:16:46.454 "data_offset": 0, 00:16:46.454 "data_size": 0 00:16:46.454 }, 00:16:46.454 { 00:16:46.454 "name": "BaseBdev2", 00:16:46.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.454 "is_configured": false, 00:16:46.454 "data_offset": 0, 00:16:46.454 "data_size": 0 00:16:46.454 } 00:16:46.454 ] 00:16:46.454 }' 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.454 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.022 [2024-11-18 10:45:12.671975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.022 [2024-11-18 10:45:12.672079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.022 [2024-11-18 10:45:12.683958] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.022 [2024-11-18 10:45:12.684054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.022 [2024-11-18 10:45:12.684082] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.022 [2024-11-18 10:45:12.684108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.022 10:45:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.022 [2024-11-18 10:45:12.736807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.022 BaseBdev1 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.022 [ 00:16:47.022 { 00:16:47.022 "name": "BaseBdev1", 00:16:47.022 "aliases": [ 00:16:47.022 
"37a750e9-3b19-42ff-9dd9-85c282bbb3d7" 00:16:47.022 ], 00:16:47.022 "product_name": "Malloc disk", 00:16:47.022 "block_size": 4096, 00:16:47.022 "num_blocks": 8192, 00:16:47.022 "uuid": "37a750e9-3b19-42ff-9dd9-85c282bbb3d7", 00:16:47.022 "assigned_rate_limits": { 00:16:47.022 "rw_ios_per_sec": 0, 00:16:47.022 "rw_mbytes_per_sec": 0, 00:16:47.022 "r_mbytes_per_sec": 0, 00:16:47.022 "w_mbytes_per_sec": 0 00:16:47.022 }, 00:16:47.022 "claimed": true, 00:16:47.022 "claim_type": "exclusive_write", 00:16:47.022 "zoned": false, 00:16:47.022 "supported_io_types": { 00:16:47.022 "read": true, 00:16:47.022 "write": true, 00:16:47.022 "unmap": true, 00:16:47.022 "flush": true, 00:16:47.022 "reset": true, 00:16:47.022 "nvme_admin": false, 00:16:47.022 "nvme_io": false, 00:16:47.022 "nvme_io_md": false, 00:16:47.022 "write_zeroes": true, 00:16:47.022 "zcopy": true, 00:16:47.022 "get_zone_info": false, 00:16:47.022 "zone_management": false, 00:16:47.022 "zone_append": false, 00:16:47.022 "compare": false, 00:16:47.022 "compare_and_write": false, 00:16:47.022 "abort": true, 00:16:47.022 "seek_hole": false, 00:16:47.022 "seek_data": false, 00:16:47.022 "copy": true, 00:16:47.022 "nvme_iov_md": false 00:16:47.022 }, 00:16:47.022 "memory_domains": [ 00:16:47.022 { 00:16:47.022 "dma_device_id": "system", 00:16:47.022 "dma_device_type": 1 00:16:47.022 }, 00:16:47.022 { 00:16:47.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.022 "dma_device_type": 2 00:16:47.022 } 00:16:47.022 ], 00:16:47.022 "driver_specific": {} 00:16:47.022 } 00:16:47.022 ] 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.022 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.023 "name": "Existed_Raid", 00:16:47.023 "uuid": "58a947aa-5f1c-437b-afea-06bc06455aba", 00:16:47.023 "strip_size_kb": 0, 00:16:47.023 "state": "configuring", 00:16:47.023 "raid_level": "raid1", 00:16:47.023 "superblock": true, 00:16:47.023 "num_base_bdevs": 2, 00:16:47.023 
"num_base_bdevs_discovered": 1, 00:16:47.023 "num_base_bdevs_operational": 2, 00:16:47.023 "base_bdevs_list": [ 00:16:47.023 { 00:16:47.023 "name": "BaseBdev1", 00:16:47.023 "uuid": "37a750e9-3b19-42ff-9dd9-85c282bbb3d7", 00:16:47.023 "is_configured": true, 00:16:47.023 "data_offset": 256, 00:16:47.023 "data_size": 7936 00:16:47.023 }, 00:16:47.023 { 00:16:47.023 "name": "BaseBdev2", 00:16:47.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.023 "is_configured": false, 00:16:47.023 "data_offset": 0, 00:16:47.023 "data_size": 0 00:16:47.023 } 00:16:47.023 ] 00:16:47.023 }' 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.023 10:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.601 [2024-11-18 10:45:13.255922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.601 [2024-11-18 10:45:13.256019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.601 [2024-11-18 10:45:13.267956] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.601 [2024-11-18 10:45:13.270055] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.601 [2024-11-18 10:45:13.270130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.601 "name": "Existed_Raid", 00:16:47.601 "uuid": "efe6dd18-23b6-4a1d-9141-102c90ed27c3", 00:16:47.601 "strip_size_kb": 0, 00:16:47.601 "state": "configuring", 00:16:47.601 "raid_level": "raid1", 00:16:47.601 "superblock": true, 00:16:47.601 "num_base_bdevs": 2, 00:16:47.601 "num_base_bdevs_discovered": 1, 00:16:47.601 "num_base_bdevs_operational": 2, 00:16:47.601 "base_bdevs_list": [ 00:16:47.601 { 00:16:47.601 "name": "BaseBdev1", 00:16:47.601 "uuid": "37a750e9-3b19-42ff-9dd9-85c282bbb3d7", 00:16:47.601 "is_configured": true, 00:16:47.601 "data_offset": 256, 00:16:47.601 "data_size": 7936 00:16:47.601 }, 00:16:47.601 { 00:16:47.601 "name": "BaseBdev2", 00:16:47.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.601 "is_configured": false, 00:16:47.601 "data_offset": 0, 00:16:47.601 "data_size": 0 00:16:47.601 } 00:16:47.601 ] 00:16:47.601 }' 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.601 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.877 10:45:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.877 [2024-11-18 10:45:13.750411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.877 [2024-11-18 10:45:13.750730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.877 [2024-11-18 10:45:13.750752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:47.877 [2024-11-18 10:45:13.751058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:47.877 [2024-11-18 10:45:13.751253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.877 [2024-11-18 10:45:13.751269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:47.877 BaseBdev2 00:16:47.877 [2024-11-18 10:45:13.751447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.877 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.878 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.878 10:45:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.878 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.137 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.137 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:48.137 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.137 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.137 [ 00:16:48.137 { 00:16:48.137 "name": "BaseBdev2", 00:16:48.137 "aliases": [ 00:16:48.137 "2d176d4f-f94e-458e-9da8-71ffcf71d5b9" 00:16:48.137 ], 00:16:48.137 "product_name": "Malloc disk", 00:16:48.137 "block_size": 4096, 00:16:48.137 "num_blocks": 8192, 00:16:48.137 "uuid": "2d176d4f-f94e-458e-9da8-71ffcf71d5b9", 00:16:48.137 "assigned_rate_limits": { 00:16:48.137 "rw_ios_per_sec": 0, 00:16:48.137 "rw_mbytes_per_sec": 0, 00:16:48.137 "r_mbytes_per_sec": 0, 00:16:48.137 "w_mbytes_per_sec": 0 00:16:48.137 }, 00:16:48.137 "claimed": true, 00:16:48.137 "claim_type": "exclusive_write", 00:16:48.137 "zoned": false, 00:16:48.137 "supported_io_types": { 00:16:48.137 "read": true, 00:16:48.137 "write": true, 00:16:48.137 "unmap": true, 00:16:48.137 "flush": true, 00:16:48.137 "reset": true, 00:16:48.137 "nvme_admin": false, 00:16:48.137 "nvme_io": false, 00:16:48.137 "nvme_io_md": false, 00:16:48.137 "write_zeroes": true, 00:16:48.137 "zcopy": true, 00:16:48.137 "get_zone_info": false, 00:16:48.137 "zone_management": false, 00:16:48.137 "zone_append": false, 00:16:48.137 "compare": false, 00:16:48.137 "compare_and_write": false, 00:16:48.137 "abort": true, 00:16:48.137 "seek_hole": false, 00:16:48.137 "seek_data": false, 00:16:48.138 "copy": true, 00:16:48.138 "nvme_iov_md": false 
00:16:48.138 }, 00:16:48.138 "memory_domains": [ 00:16:48.138 { 00:16:48.138 "dma_device_id": "system", 00:16:48.138 "dma_device_type": 1 00:16:48.138 }, 00:16:48.138 { 00:16:48.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.138 "dma_device_type": 2 00:16:48.138 } 00:16:48.138 ], 00:16:48.138 "driver_specific": {} 00:16:48.138 } 00:16:48.138 ] 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.138 "name": "Existed_Raid", 00:16:48.138 "uuid": "efe6dd18-23b6-4a1d-9141-102c90ed27c3", 00:16:48.138 "strip_size_kb": 0, 00:16:48.138 "state": "online", 00:16:48.138 "raid_level": "raid1", 00:16:48.138 "superblock": true, 00:16:48.138 "num_base_bdevs": 2, 00:16:48.138 "num_base_bdevs_discovered": 2, 00:16:48.138 "num_base_bdevs_operational": 2, 00:16:48.138 "base_bdevs_list": [ 00:16:48.138 { 00:16:48.138 "name": "BaseBdev1", 00:16:48.138 "uuid": "37a750e9-3b19-42ff-9dd9-85c282bbb3d7", 00:16:48.138 "is_configured": true, 00:16:48.138 "data_offset": 256, 00:16:48.138 "data_size": 7936 00:16:48.138 }, 00:16:48.138 { 00:16:48.138 "name": "BaseBdev2", 00:16:48.138 "uuid": "2d176d4f-f94e-458e-9da8-71ffcf71d5b9", 00:16:48.138 "is_configured": true, 00:16:48.138 "data_offset": 256, 00:16:48.138 "data_size": 7936 00:16:48.138 } 00:16:48.138 ] 00:16:48.138 }' 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.138 10:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:48.397 10:45:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.397 [2024-11-18 10:45:14.237865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:48.397 "name": "Existed_Raid", 00:16:48.397 "aliases": [ 00:16:48.397 "efe6dd18-23b6-4a1d-9141-102c90ed27c3" 00:16:48.397 ], 00:16:48.397 "product_name": "Raid Volume", 00:16:48.397 "block_size": 4096, 00:16:48.397 "num_blocks": 7936, 00:16:48.397 "uuid": "efe6dd18-23b6-4a1d-9141-102c90ed27c3", 00:16:48.397 "assigned_rate_limits": { 00:16:48.397 "rw_ios_per_sec": 0, 00:16:48.397 "rw_mbytes_per_sec": 0, 00:16:48.397 "r_mbytes_per_sec": 0, 00:16:48.397 "w_mbytes_per_sec": 0 00:16:48.397 }, 00:16:48.397 "claimed": false, 00:16:48.397 "zoned": false, 00:16:48.397 "supported_io_types": { 00:16:48.397 "read": true, 
00:16:48.397 "write": true, 00:16:48.397 "unmap": false, 00:16:48.397 "flush": false, 00:16:48.397 "reset": true, 00:16:48.397 "nvme_admin": false, 00:16:48.397 "nvme_io": false, 00:16:48.397 "nvme_io_md": false, 00:16:48.397 "write_zeroes": true, 00:16:48.397 "zcopy": false, 00:16:48.397 "get_zone_info": false, 00:16:48.397 "zone_management": false, 00:16:48.397 "zone_append": false, 00:16:48.397 "compare": false, 00:16:48.397 "compare_and_write": false, 00:16:48.397 "abort": false, 00:16:48.397 "seek_hole": false, 00:16:48.397 "seek_data": false, 00:16:48.397 "copy": false, 00:16:48.397 "nvme_iov_md": false 00:16:48.397 }, 00:16:48.397 "memory_domains": [ 00:16:48.397 { 00:16:48.397 "dma_device_id": "system", 00:16:48.397 "dma_device_type": 1 00:16:48.397 }, 00:16:48.397 { 00:16:48.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.397 "dma_device_type": 2 00:16:48.397 }, 00:16:48.397 { 00:16:48.397 "dma_device_id": "system", 00:16:48.397 "dma_device_type": 1 00:16:48.397 }, 00:16:48.397 { 00:16:48.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.397 "dma_device_type": 2 00:16:48.397 } 00:16:48.397 ], 00:16:48.397 "driver_specific": { 00:16:48.397 "raid": { 00:16:48.397 "uuid": "efe6dd18-23b6-4a1d-9141-102c90ed27c3", 00:16:48.397 "strip_size_kb": 0, 00:16:48.397 "state": "online", 00:16:48.397 "raid_level": "raid1", 00:16:48.397 "superblock": true, 00:16:48.397 "num_base_bdevs": 2, 00:16:48.397 "num_base_bdevs_discovered": 2, 00:16:48.397 "num_base_bdevs_operational": 2, 00:16:48.397 "base_bdevs_list": [ 00:16:48.397 { 00:16:48.397 "name": "BaseBdev1", 00:16:48.397 "uuid": "37a750e9-3b19-42ff-9dd9-85c282bbb3d7", 00:16:48.397 "is_configured": true, 00:16:48.397 "data_offset": 256, 00:16:48.397 "data_size": 7936 00:16:48.397 }, 00:16:48.397 { 00:16:48.397 "name": "BaseBdev2", 00:16:48.397 "uuid": "2d176d4f-f94e-458e-9da8-71ffcf71d5b9", 00:16:48.397 "is_configured": true, 00:16:48.397 "data_offset": 256, 00:16:48.397 "data_size": 7936 00:16:48.397 } 
00:16:48.397 ] 00:16:48.397 } 00:16:48.397 } 00:16:48.397 }' 00:16:48.397 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:48.657 BaseBdev2' 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.657 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.657 [2024-11-18 10:45:14.465280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:48.917 10:45:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.917 "name": "Existed_Raid", 00:16:48.917 "uuid": "efe6dd18-23b6-4a1d-9141-102c90ed27c3", 00:16:48.917 "strip_size_kb": 0, 00:16:48.917 "state": "online", 00:16:48.917 "raid_level": "raid1", 00:16:48.917 "superblock": true, 00:16:48.917 
"num_base_bdevs": 2, 00:16:48.917 "num_base_bdevs_discovered": 1, 00:16:48.917 "num_base_bdevs_operational": 1, 00:16:48.917 "base_bdevs_list": [ 00:16:48.917 { 00:16:48.917 "name": null, 00:16:48.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.917 "is_configured": false, 00:16:48.917 "data_offset": 0, 00:16:48.917 "data_size": 7936 00:16:48.917 }, 00:16:48.917 { 00:16:48.917 "name": "BaseBdev2", 00:16:48.917 "uuid": "2d176d4f-f94e-458e-9da8-71ffcf71d5b9", 00:16:48.917 "is_configured": true, 00:16:48.917 "data_offset": 256, 00:16:48.917 "data_size": 7936 00:16:48.917 } 00:16:48.917 ] 00:16:48.917 }' 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.917 10:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.177 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:49.177 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.177 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.177 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.177 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.177 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.177 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.437 [2024-11-18 10:45:15.082875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.437 [2024-11-18 10:45:15.082992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.437 [2024-11-18 10:45:15.181417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.437 [2024-11-18 10:45:15.181557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.437 [2024-11-18 10:45:15.181602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:49.437 10:45:15 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85724 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85724 ']' 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85724 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85724 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.437 killing process with pid 85724 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85724' 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85724 00:16:49.437 [2024-11-18 10:45:15.280599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.437 10:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85724 00:16:49.437 [2024-11-18 10:45:15.298290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.818 ************************************ 00:16:50.818 END TEST raid_state_function_test_sb_4k 00:16:50.818 ************************************ 00:16:50.818 10:45:16 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:16:50.818 00:16:50.818 real 0m5.180s 00:16:50.818 user 0m7.331s 00:16:50.818 sys 0m1.017s 00:16:50.818 10:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.818 10:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.818 10:45:16 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:50.818 10:45:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:50.818 10:45:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.818 10:45:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.818 ************************************ 00:16:50.818 START TEST raid_superblock_test_4k 00:16:50.818 ************************************ 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:50.818 
10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85977 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85977 00:16:50.818 10:45:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85977 ']' 00:16:50.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.819 10:45:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.819 10:45:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.819 10:45:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.819 10:45:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.819 10:45:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.819 [2024-11-18 10:45:16.622589] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:50.819 [2024-11-18 10:45:16.622768] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85977 ] 00:16:51.078 [2024-11-18 10:45:16.795106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.078 [2024-11-18 10:45:16.924838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.338 [2024-11-18 10:45:17.154688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.338 [2024-11-18 10:45:17.154723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.597 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.857 malloc1 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.857 [2024-11-18 10:45:17.493277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.857 [2024-11-18 10:45:17.493445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.857 [2024-11-18 10:45:17.493491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:51.857 [2024-11-18 10:45:17.493523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.857 [2024-11-18 10:45:17.495983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.857 [2024-11-18 10:45:17.496071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.857 pt1 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.857 malloc2 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.857 [2024-11-18 10:45:17.557489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.857 [2024-11-18 10:45:17.557609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.857 [2024-11-18 10:45:17.557647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:51.857 [2024-11-18 10:45:17.557678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.857 [2024-11-18 10:45:17.560088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.857 [2024-11-18 
10:45:17.560157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.857 pt2 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.857 [2024-11-18 10:45:17.569534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.857 [2024-11-18 10:45:17.571570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.857 [2024-11-18 10:45:17.571778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:51.857 [2024-11-18 10:45:17.571829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:51.857 [2024-11-18 10:45:17.572077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:51.857 [2024-11-18 10:45:17.572294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:51.857 [2024-11-18 10:45:17.572344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:51.857 [2024-11-18 10:45:17.572528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.857 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.858 "name": "raid_bdev1", 00:16:51.858 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:51.858 "strip_size_kb": 0, 00:16:51.858 "state": "online", 00:16:51.858 "raid_level": "raid1", 00:16:51.858 "superblock": true, 00:16:51.858 "num_base_bdevs": 2, 00:16:51.858 
"num_base_bdevs_discovered": 2, 00:16:51.858 "num_base_bdevs_operational": 2, 00:16:51.858 "base_bdevs_list": [ 00:16:51.858 { 00:16:51.858 "name": "pt1", 00:16:51.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.858 "is_configured": true, 00:16:51.858 "data_offset": 256, 00:16:51.858 "data_size": 7936 00:16:51.858 }, 00:16:51.858 { 00:16:51.858 "name": "pt2", 00:16:51.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.858 "is_configured": true, 00:16:51.858 "data_offset": 256, 00:16:51.858 "data_size": 7936 00:16:51.858 } 00:16:51.858 ] 00:16:51.858 }' 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.858 10:45:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.425 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:52.425 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:52.425 10:45:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.425 [2024-11-18 10:45:18.012989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:52.425 "name": "raid_bdev1", 00:16:52.425 "aliases": [ 00:16:52.425 "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5" 00:16:52.425 ], 00:16:52.425 "product_name": "Raid Volume", 00:16:52.425 "block_size": 4096, 00:16:52.425 "num_blocks": 7936, 00:16:52.425 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:52.425 "assigned_rate_limits": { 00:16:52.425 "rw_ios_per_sec": 0, 00:16:52.425 "rw_mbytes_per_sec": 0, 00:16:52.425 "r_mbytes_per_sec": 0, 00:16:52.425 "w_mbytes_per_sec": 0 00:16:52.425 }, 00:16:52.425 "claimed": false, 00:16:52.425 "zoned": false, 00:16:52.425 "supported_io_types": { 00:16:52.425 "read": true, 00:16:52.425 "write": true, 00:16:52.425 "unmap": false, 00:16:52.425 "flush": false, 00:16:52.425 "reset": true, 00:16:52.425 "nvme_admin": false, 00:16:52.425 "nvme_io": false, 00:16:52.425 "nvme_io_md": false, 00:16:52.425 "write_zeroes": true, 00:16:52.425 "zcopy": false, 00:16:52.425 "get_zone_info": false, 00:16:52.425 "zone_management": false, 00:16:52.425 "zone_append": false, 00:16:52.425 "compare": false, 00:16:52.425 "compare_and_write": false, 00:16:52.425 "abort": false, 00:16:52.425 "seek_hole": false, 00:16:52.425 "seek_data": false, 00:16:52.425 "copy": false, 00:16:52.425 "nvme_iov_md": false 00:16:52.425 }, 00:16:52.425 "memory_domains": [ 00:16:52.425 { 00:16:52.425 "dma_device_id": "system", 00:16:52.425 "dma_device_type": 1 00:16:52.425 }, 00:16:52.425 { 00:16:52.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.425 "dma_device_type": 2 00:16:52.425 }, 00:16:52.425 { 00:16:52.425 "dma_device_id": "system", 00:16:52.425 "dma_device_type": 1 00:16:52.425 }, 00:16:52.425 { 00:16:52.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.425 "dma_device_type": 2 00:16:52.425 } 00:16:52.425 ], 
00:16:52.425 "driver_specific": { 00:16:52.425 "raid": { 00:16:52.425 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:52.425 "strip_size_kb": 0, 00:16:52.425 "state": "online", 00:16:52.425 "raid_level": "raid1", 00:16:52.425 "superblock": true, 00:16:52.425 "num_base_bdevs": 2, 00:16:52.425 "num_base_bdevs_discovered": 2, 00:16:52.425 "num_base_bdevs_operational": 2, 00:16:52.425 "base_bdevs_list": [ 00:16:52.425 { 00:16:52.425 "name": "pt1", 00:16:52.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.425 "is_configured": true, 00:16:52.425 "data_offset": 256, 00:16:52.425 "data_size": 7936 00:16:52.425 }, 00:16:52.425 { 00:16:52.425 "name": "pt2", 00:16:52.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.425 "is_configured": true, 00:16:52.425 "data_offset": 256, 00:16:52.425 "data_size": 7936 00:16:52.425 } 00:16:52.425 ] 00:16:52.425 } 00:16:52.425 } 00:16:52.425 }' 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:52.425 pt2' 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:52.425 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.426 10:45:18 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:52.426 [2024-11-18 10:45:18.248555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5 ']' 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.426 [2024-11-18 10:45:18.300212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.426 [2024-11-18 10:45:18.300233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.426 [2024-11-18 10:45:18.300315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.426 [2024-11-18 10:45:18.300371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.426 [2024-11-18 10:45:18.300384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:52.426 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.685 [2024-11-18 10:45:18.443990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:52.685 [2024-11-18 10:45:18.446127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:52.685 [2024-11-18 10:45:18.446210] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:52.685 [2024-11-18 10:45:18.446278] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:52.685 [2024-11-18 10:45:18.446292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.685 [2024-11-18 10:45:18.446303] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:52.685 request: 00:16:52.685 { 00:16:52.685 "name": "raid_bdev1", 00:16:52.685 "raid_level": "raid1", 00:16:52.685 "base_bdevs": [ 00:16:52.685 "malloc1", 00:16:52.685 "malloc2" 00:16:52.685 ], 00:16:52.685 "superblock": false, 00:16:52.685 "method": "bdev_raid_create", 00:16:52.685 "req_id": 1 00:16:52.685 } 00:16:52.685 Got JSON-RPC error response 00:16:52.685 response: 00:16:52.685 { 00:16:52.685 "code": -17, 00:16:52.685 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:52.685 } 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:52.685 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.686 [2024-11-18 10:45:18.499872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:52.686 [2024-11-18 10:45:18.499973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.686 [2024-11-18 10:45:18.500006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:52.686 [2024-11-18 10:45:18.500036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.686 [2024-11-18 10:45:18.502416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.686 [2024-11-18 10:45:18.502485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:52.686 [2024-11-18 10:45:18.502574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:52.686 [2024-11-18 10:45:18.502678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:52.686 pt1 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.686 "name": "raid_bdev1", 00:16:52.686 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:52.686 "strip_size_kb": 0, 00:16:52.686 "state": "configuring", 00:16:52.686 "raid_level": "raid1", 00:16:52.686 "superblock": true, 00:16:52.686 "num_base_bdevs": 2, 00:16:52.686 "num_base_bdevs_discovered": 1, 00:16:52.686 "num_base_bdevs_operational": 2, 00:16:52.686 "base_bdevs_list": [ 00:16:52.686 { 00:16:52.686 "name": "pt1", 00:16:52.686 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.686 "is_configured": true, 00:16:52.686 "data_offset": 256, 00:16:52.686 "data_size": 7936 00:16:52.686 }, 00:16:52.686 { 00:16:52.686 "name": null, 00:16:52.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.686 "is_configured": false, 00:16:52.686 "data_offset": 256, 00:16:52.686 "data_size": 7936 00:16:52.686 } 
00:16:52.686 ] 00:16:52.686 }' 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.686 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.255 [2024-11-18 10:45:18.971106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:53.255 [2024-11-18 10:45:18.971234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.255 [2024-11-18 10:45:18.971276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:53.255 [2024-11-18 10:45:18.971291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.255 [2024-11-18 10:45:18.971755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.255 [2024-11-18 10:45:18.971776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:53.255 [2024-11-18 10:45:18.971849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:53.255 [2024-11-18 10:45:18.971873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:53.255 [2024-11-18 10:45:18.971999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:53.255 [2024-11-18 10:45:18.972010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:53.255 [2024-11-18 10:45:18.972271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:53.255 [2024-11-18 10:45:18.972432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:53.255 [2024-11-18 10:45:18.972447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:53.255 [2024-11-18 10:45:18.972605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.255 pt2 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.255 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.256 10:45:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.256 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.256 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.256 "name": "raid_bdev1", 00:16:53.256 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:53.256 "strip_size_kb": 0, 00:16:53.256 "state": "online", 00:16:53.256 "raid_level": "raid1", 00:16:53.256 "superblock": true, 00:16:53.256 "num_base_bdevs": 2, 00:16:53.256 "num_base_bdevs_discovered": 2, 00:16:53.256 "num_base_bdevs_operational": 2, 00:16:53.256 "base_bdevs_list": [ 00:16:53.256 { 00:16:53.256 "name": "pt1", 00:16:53.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.256 "is_configured": true, 00:16:53.256 "data_offset": 256, 00:16:53.256 "data_size": 7936 00:16:53.256 }, 00:16:53.256 { 00:16:53.256 "name": "pt2", 00:16:53.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.256 "is_configured": true, 00:16:53.256 "data_offset": 256, 00:16:53.256 "data_size": 7936 00:16:53.256 } 00:16:53.256 ] 00:16:53.256 }' 00:16:53.256 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.256 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 [2024-11-18 10:45:19.446550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:53.838 "name": "raid_bdev1", 00:16:53.838 "aliases": [ 00:16:53.838 "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5" 00:16:53.838 ], 00:16:53.838 "product_name": "Raid Volume", 00:16:53.838 "block_size": 4096, 00:16:53.838 "num_blocks": 7936, 00:16:53.838 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:53.838 "assigned_rate_limits": { 00:16:53.838 "rw_ios_per_sec": 0, 00:16:53.838 "rw_mbytes_per_sec": 0, 00:16:53.838 "r_mbytes_per_sec": 0, 00:16:53.838 "w_mbytes_per_sec": 0 00:16:53.838 }, 00:16:53.838 "claimed": false, 00:16:53.838 "zoned": false, 00:16:53.838 "supported_io_types": { 00:16:53.838 "read": true, 00:16:53.838 "write": true, 00:16:53.838 "unmap": false, 
00:16:53.838 "flush": false, 00:16:53.838 "reset": true, 00:16:53.838 "nvme_admin": false, 00:16:53.838 "nvme_io": false, 00:16:53.838 "nvme_io_md": false, 00:16:53.838 "write_zeroes": true, 00:16:53.838 "zcopy": false, 00:16:53.838 "get_zone_info": false, 00:16:53.838 "zone_management": false, 00:16:53.838 "zone_append": false, 00:16:53.838 "compare": false, 00:16:53.838 "compare_and_write": false, 00:16:53.838 "abort": false, 00:16:53.838 "seek_hole": false, 00:16:53.838 "seek_data": false, 00:16:53.838 "copy": false, 00:16:53.838 "nvme_iov_md": false 00:16:53.838 }, 00:16:53.838 "memory_domains": [ 00:16:53.838 { 00:16:53.838 "dma_device_id": "system", 00:16:53.838 "dma_device_type": 1 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.838 "dma_device_type": 2 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "dma_device_id": "system", 00:16:53.838 "dma_device_type": 1 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.838 "dma_device_type": 2 00:16:53.838 } 00:16:53.838 ], 00:16:53.838 "driver_specific": { 00:16:53.838 "raid": { 00:16:53.838 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:53.838 "strip_size_kb": 0, 00:16:53.838 "state": "online", 00:16:53.838 "raid_level": "raid1", 00:16:53.838 "superblock": true, 00:16:53.838 "num_base_bdevs": 2, 00:16:53.838 "num_base_bdevs_discovered": 2, 00:16:53.838 "num_base_bdevs_operational": 2, 00:16:53.838 "base_bdevs_list": [ 00:16:53.838 { 00:16:53.838 "name": "pt1", 00:16:53.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 256, 00:16:53.838 "data_size": 7936 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "name": "pt2", 00:16:53.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 256, 00:16:53.838 "data_size": 7936 00:16:53.838 } 00:16:53.838 ] 00:16:53.838 } 00:16:53.838 } 00:16:53.838 }' 00:16:53.838 
10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:53.838 pt2' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.838 
10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:53.838 [2024-11-18 10:45:19.670096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5 '!=' 0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5 ']' 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 [2024-11-18 10:45:19.713844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:53.838 
10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.838 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.098 "name": "raid_bdev1", 00:16:54.098 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 
00:16:54.098 "strip_size_kb": 0, 00:16:54.098 "state": "online", 00:16:54.098 "raid_level": "raid1", 00:16:54.098 "superblock": true, 00:16:54.098 "num_base_bdevs": 2, 00:16:54.098 "num_base_bdevs_discovered": 1, 00:16:54.098 "num_base_bdevs_operational": 1, 00:16:54.098 "base_bdevs_list": [ 00:16:54.098 { 00:16:54.098 "name": null, 00:16:54.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.098 "is_configured": false, 00:16:54.098 "data_offset": 0, 00:16:54.098 "data_size": 7936 00:16:54.098 }, 00:16:54.098 { 00:16:54.098 "name": "pt2", 00:16:54.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.098 "is_configured": true, 00:16:54.098 "data_offset": 256, 00:16:54.098 "data_size": 7936 00:16:54.098 } 00:16:54.098 ] 00:16:54.098 }' 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.098 10:45:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.357 [2024-11-18 10:45:20.133110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.357 [2024-11-18 10:45:20.133187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.357 [2024-11-18 10:45:20.133283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.357 [2024-11-18 10:45:20.133331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.357 [2024-11-18 10:45:20.133344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:54.357 10:45:20 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:54.357 10:45:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.357 [2024-11-18 10:45:20.208958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.357 [2024-11-18 10:45:20.209015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.357 [2024-11-18 10:45:20.209033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:54.357 [2024-11-18 10:45:20.209043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.357 [2024-11-18 10:45:20.211505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.357 [2024-11-18 10:45:20.211541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.357 [2024-11-18 10:45:20.211618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:54.357 [2024-11-18 10:45:20.211669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.357 [2024-11-18 10:45:20.211778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:54.357 [2024-11-18 10:45:20.211790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:54.357 [2024-11-18 10:45:20.212021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:54.357 [2024-11-18 10:45:20.212228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:54.357 [2024-11-18 10:45:20.212239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:16:54.357 [2024-11-18 10:45:20.212389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.357 pt2 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.357 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.616 10:45:20 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.616 "name": "raid_bdev1", 00:16:54.616 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:54.616 "strip_size_kb": 0, 00:16:54.616 "state": "online", 00:16:54.616 "raid_level": "raid1", 00:16:54.616 "superblock": true, 00:16:54.616 "num_base_bdevs": 2, 00:16:54.616 "num_base_bdevs_discovered": 1, 00:16:54.616 "num_base_bdevs_operational": 1, 00:16:54.616 "base_bdevs_list": [ 00:16:54.616 { 00:16:54.616 "name": null, 00:16:54.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.616 "is_configured": false, 00:16:54.616 "data_offset": 256, 00:16:54.616 "data_size": 7936 00:16:54.616 }, 00:16:54.616 { 00:16:54.616 "name": "pt2", 00:16:54.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.616 "is_configured": true, 00:16:54.616 "data_offset": 256, 00:16:54.616 "data_size": 7936 00:16:54.616 } 00:16:54.616 ] 00:16:54.616 }' 00:16:54.616 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.616 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.875 [2024-11-18 10:45:20.668188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.875 [2024-11-18 10:45:20.668266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.875 [2024-11-18 10:45:20.668358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.875 [2024-11-18 10:45:20.668442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.875 [2024-11-18 10:45:20.668491] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.875 [2024-11-18 10:45:20.732075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.875 [2024-11-18 10:45:20.732183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.875 [2024-11-18 10:45:20.732220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:54.875 [2024-11-18 10:45:20.732249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.875 [2024-11-18 10:45:20.734694] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.875 [2024-11-18 10:45:20.734762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.875 [2024-11-18 10:45:20.734881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:54.875 [2024-11-18 10:45:20.734946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.875 [2024-11-18 10:45:20.735164] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:54.875 [2024-11-18 10:45:20.735225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.875 [2024-11-18 10:45:20.735266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:54.875 [2024-11-18 10:45:20.735397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.875 [2024-11-18 10:45:20.735518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:54.875 [2024-11-18 10:45:20.735553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:54.875 [2024-11-18 10:45:20.735821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:54.875 [2024-11-18 10:45:20.736007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:54.875 [2024-11-18 10:45:20.736051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:54.875 [2024-11-18 10:45:20.736289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.875 pt1 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.875 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.134 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.134 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.134 "name": "raid_bdev1", 00:16:55.134 "uuid": "0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5", 00:16:55.134 "strip_size_kb": 0, 00:16:55.134 "state": "online", 00:16:55.134 "raid_level": "raid1", 
00:16:55.134 "superblock": true, 00:16:55.134 "num_base_bdevs": 2, 00:16:55.134 "num_base_bdevs_discovered": 1, 00:16:55.135 "num_base_bdevs_operational": 1, 00:16:55.135 "base_bdevs_list": [ 00:16:55.135 { 00:16:55.135 "name": null, 00:16:55.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.135 "is_configured": false, 00:16:55.135 "data_offset": 256, 00:16:55.135 "data_size": 7936 00:16:55.135 }, 00:16:55.135 { 00:16:55.135 "name": "pt2", 00:16:55.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.135 "is_configured": true, 00:16:55.135 "data_offset": 256, 00:16:55.135 "data_size": 7936 00:16:55.135 } 00:16:55.135 ] 00:16:55.135 }' 00:16:55.135 10:45:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.135 10:45:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.394 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.394 
[2024-11-18 10:45:21.267574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5 '!=' 0e67b8a4-06f2-4fe6-b4fb-d7cc67a08ca5 ']' 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85977 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85977 ']' 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85977 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85977 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.653 killing process with pid 85977 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85977' 00:16:55.653 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85977 00:16:55.653 [2024-11-18 10:45:21.348529] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.653 [2024-11-18 10:45:21.348611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.653 [2024-11-18 10:45:21.348656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.654 [2024-11-18 10:45:21.348672] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:55.654 10:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85977 00:16:55.913 [2024-11-18 10:45:21.561674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.852 ************************************ 00:16:56.852 END TEST raid_superblock_test_4k 00:16:56.852 10:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:56.852 00:16:56.852 real 0m6.176s 00:16:56.852 user 0m9.244s 00:16:56.852 sys 0m1.238s 00:16:56.852 10:45:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.852 10:45:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.852 ************************************ 00:16:57.112 10:45:22 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:57.112 10:45:22 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:57.112 10:45:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:57.112 10:45:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.112 10:45:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.112 ************************************ 00:16:57.112 START TEST raid_rebuild_test_sb_4k 00:16:57.112 ************************************ 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:57.112 10:45:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86304 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86304 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86304 ']' 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.112 10:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.112 [2024-11-18 10:45:22.892805] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:57.112 [2024-11-18 10:45:22.893003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:57.112 Zero copy mechanism will not be used. 
00:16:57.112 -allocations --file-prefix=spdk_pid86304 ] 00:16:57.372 [2024-11-18 10:45:23.072026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.372 [2024-11-18 10:45:23.203376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.632 [2024-11-18 10:45:23.424789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.632 [2024-11-18 10:45:23.424930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.892 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.892 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:57.892 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.892 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:57.892 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.892 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.892 BaseBdev1_malloc 00:16:57.892 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.892 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:57.893 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.893 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.893 [2024-11-18 10:45:23.745696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:57.893 [2024-11-18 10:45:23.745776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.893 [2024-11-18 10:45:23.745802] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:16:57.893 [2024-11-18 10:45:23.745814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.893 [2024-11-18 10:45:23.748231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.893 [2024-11-18 10:45:23.748339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:57.893 BaseBdev1 00:16:57.893 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.893 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.893 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:57.893 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.893 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.152 BaseBdev2_malloc 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.152 [2024-11-18 10:45:23.805503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:58.152 [2024-11-18 10:45:23.805566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.152 [2024-11-18 10:45:23.805585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:58.152 [2024-11-18 10:45:23.805598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:58.152 [2024-11-18 10:45:23.807945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.152 [2024-11-18 10:45:23.807982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:58.152 BaseBdev2 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.152 spare_malloc 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.152 spare_delay 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.152 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.152 [2024-11-18 10:45:23.888543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:58.152 [2024-11-18 10:45:23.888601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.152 [2024-11-18 10:45:23.888620] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:58.152 [2024-11-18 10:45:23.888631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.152 [2024-11-18 10:45:23.891013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.152 [2024-11-18 10:45:23.891049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:58.152 spare 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.153 [2024-11-18 10:45:23.900575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.153 [2024-11-18 10:45:23.902653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.153 [2024-11-18 10:45:23.902824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:58.153 [2024-11-18 10:45:23.902840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:58.153 [2024-11-18 10:45:23.903084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:58.153 [2024-11-18 10:45:23.903344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:58.153 [2024-11-18 10:45:23.903374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:58.153 [2024-11-18 10:45:23.903551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.153 
10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.153 "name": "raid_bdev1", 00:16:58.153 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 
00:16:58.153 "strip_size_kb": 0, 00:16:58.153 "state": "online", 00:16:58.153 "raid_level": "raid1", 00:16:58.153 "superblock": true, 00:16:58.153 "num_base_bdevs": 2, 00:16:58.153 "num_base_bdevs_discovered": 2, 00:16:58.153 "num_base_bdevs_operational": 2, 00:16:58.153 "base_bdevs_list": [ 00:16:58.153 { 00:16:58.153 "name": "BaseBdev1", 00:16:58.153 "uuid": "35ad4236-a5a1-51fc-ac11-d637a41628e7", 00:16:58.153 "is_configured": true, 00:16:58.153 "data_offset": 256, 00:16:58.153 "data_size": 7936 00:16:58.153 }, 00:16:58.153 { 00:16:58.153 "name": "BaseBdev2", 00:16:58.153 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:16:58.153 "is_configured": true, 00:16:58.153 "data_offset": 256, 00:16:58.153 "data_size": 7936 00:16:58.153 } 00:16:58.153 ] 00:16:58.153 }' 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.153 10:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.721 [2024-11-18 10:45:24.336100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.721 10:45:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.721 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:58.721 [2024-11-18 10:45:24.595453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:16:58.980 /dev/nbd0 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.980 1+0 records in 00:16:58.980 1+0 records out 00:16:58.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363152 s, 11.3 MB/s 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.980 10:45:24 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.980 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:58.981 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:58.981 10:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:59.548 7936+0 records in 00:16:59.548 7936+0 records out 00:16:59.548 32505856 bytes (33 MB, 31 MiB) copied, 0.651234 s, 49.9 MB/s 00:16:59.548 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:59.548 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.548 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:59.548 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.548 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:59.548 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.548 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:59.807 [2024-11-18 10:45:25.528294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.807 [2024-11-18 10:45:25.545786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.807 "name": "raid_bdev1", 00:16:59.807 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:16:59.807 "strip_size_kb": 0, 00:16:59.807 "state": "online", 00:16:59.807 "raid_level": "raid1", 00:16:59.807 "superblock": true, 00:16:59.807 "num_base_bdevs": 2, 00:16:59.807 "num_base_bdevs_discovered": 1, 00:16:59.807 "num_base_bdevs_operational": 1, 00:16:59.807 "base_bdevs_list": [ 00:16:59.807 { 00:16:59.807 "name": null, 00:16:59.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.807 "is_configured": false, 00:16:59.807 "data_offset": 0, 00:16:59.807 "data_size": 7936 00:16:59.807 }, 00:16:59.807 { 00:16:59.807 "name": "BaseBdev2", 00:16:59.807 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:16:59.807 "is_configured": true, 00:16:59.807 "data_offset": 256, 00:16:59.807 "data_size": 7936 00:16:59.807 } 00:16:59.807 ] 00:16:59.807 }' 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.807 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.375 10:45:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.375 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.375 10:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.375 [2024-11-18 10:45:26.000981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.375 [2024-11-18 10:45:26.017057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:00.376 10:45:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.376 [2024-11-18 10:45:26.018815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:00.376 10:45:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.315 10:45:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.315 "name": "raid_bdev1", 00:17:01.315 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:01.315 "strip_size_kb": 0, 00:17:01.315 "state": "online", 00:17:01.315 "raid_level": "raid1", 00:17:01.315 "superblock": true, 00:17:01.315 "num_base_bdevs": 2, 00:17:01.315 "num_base_bdevs_discovered": 2, 00:17:01.315 "num_base_bdevs_operational": 2, 00:17:01.315 "process": { 00:17:01.315 "type": "rebuild", 00:17:01.315 "target": "spare", 00:17:01.315 "progress": { 00:17:01.315 "blocks": 2560, 00:17:01.315 "percent": 32 00:17:01.315 } 00:17:01.315 }, 00:17:01.315 "base_bdevs_list": [ 00:17:01.315 { 00:17:01.315 "name": "spare", 00:17:01.315 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:01.315 "is_configured": true, 00:17:01.315 "data_offset": 256, 00:17:01.315 "data_size": 7936 00:17:01.315 }, 00:17:01.315 { 00:17:01.315 "name": "BaseBdev2", 00:17:01.315 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:01.315 "is_configured": true, 00:17:01.315 "data_offset": 256, 00:17:01.315 "data_size": 7936 00:17:01.315 } 00:17:01.315 ] 00:17:01.315 }' 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.315 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.315 10:45:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.315 [2024-11-18 10:45:27.186035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.574 [2024-11-18 10:45:27.223548] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:01.574 [2024-11-18 10:45:27.223608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.574 [2024-11-18 10:45:27.223623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.574 [2024-11-18 10:45:27.223632] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:01.574 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.574 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.574 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.574 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.574 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.575 10:45:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.575 "name": "raid_bdev1", 00:17:01.575 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:01.575 "strip_size_kb": 0, 00:17:01.575 "state": "online", 00:17:01.575 "raid_level": "raid1", 00:17:01.575 "superblock": true, 00:17:01.575 "num_base_bdevs": 2, 00:17:01.575 "num_base_bdevs_discovered": 1, 00:17:01.575 "num_base_bdevs_operational": 1, 00:17:01.575 "base_bdevs_list": [ 00:17:01.575 { 00:17:01.575 "name": null, 00:17:01.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.575 "is_configured": false, 00:17:01.575 "data_offset": 0, 00:17:01.575 "data_size": 7936 00:17:01.575 }, 00:17:01.575 { 00:17:01.575 "name": "BaseBdev2", 00:17:01.575 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:01.575 "is_configured": true, 00:17:01.575 "data_offset": 256, 00:17:01.575 "data_size": 7936 00:17:01.575 } 00:17:01.575 ] 00:17:01.575 }' 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.575 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.144 10:45:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.144 "name": "raid_bdev1", 00:17:02.144 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:02.144 "strip_size_kb": 0, 00:17:02.144 "state": "online", 00:17:02.144 "raid_level": "raid1", 00:17:02.144 "superblock": true, 00:17:02.144 "num_base_bdevs": 2, 00:17:02.144 "num_base_bdevs_discovered": 1, 00:17:02.144 "num_base_bdevs_operational": 1, 00:17:02.144 "base_bdevs_list": [ 00:17:02.144 { 00:17:02.144 "name": null, 00:17:02.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.144 "is_configured": false, 00:17:02.144 "data_offset": 0, 00:17:02.144 "data_size": 7936 00:17:02.144 }, 00:17:02.144 { 00:17:02.144 "name": "BaseBdev2", 00:17:02.144 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:02.144 "is_configured": true, 00:17:02.144 "data_offset": 256, 00:17:02.144 "data_size": 7936 00:17:02.144 } 00:17:02.144 ] 00:17:02.144 }' 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.144 10:45:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 [2024-11-18 10:45:27.907628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.144 [2024-11-18 10:45:27.923111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.144 10:45:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:02.144 [2024-11-18 10:45:27.924921] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.084 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.343 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.343 "name": "raid_bdev1", 00:17:03.343 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:03.343 "strip_size_kb": 0, 00:17:03.343 "state": "online", 00:17:03.343 "raid_level": "raid1", 00:17:03.343 "superblock": true, 00:17:03.343 "num_base_bdevs": 2, 00:17:03.343 "num_base_bdevs_discovered": 2, 00:17:03.343 "num_base_bdevs_operational": 2, 00:17:03.343 "process": { 00:17:03.343 "type": "rebuild", 00:17:03.343 "target": "spare", 00:17:03.343 "progress": { 00:17:03.343 "blocks": 2560, 00:17:03.343 "percent": 32 00:17:03.343 } 00:17:03.343 }, 00:17:03.343 "base_bdevs_list": [ 00:17:03.343 { 00:17:03.343 "name": "spare", 00:17:03.343 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:03.343 "is_configured": true, 00:17:03.343 "data_offset": 256, 00:17:03.343 "data_size": 7936 00:17:03.343 }, 00:17:03.343 { 00:17:03.343 "name": "BaseBdev2", 00:17:03.343 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:03.343 "is_configured": true, 00:17:03.343 "data_offset": 256, 00:17:03.343 "data_size": 7936 00:17:03.343 } 00:17:03.343 ] 00:17:03.343 }' 00:17:03.343 10:45:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:03.343 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=671 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.343 10:45:29 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.343 "name": "raid_bdev1", 00:17:03.343 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:03.343 "strip_size_kb": 0, 00:17:03.343 "state": "online", 00:17:03.343 "raid_level": "raid1", 00:17:03.343 "superblock": true, 00:17:03.343 "num_base_bdevs": 2, 00:17:03.343 "num_base_bdevs_discovered": 2, 00:17:03.343 "num_base_bdevs_operational": 2, 00:17:03.343 "process": { 00:17:03.343 "type": "rebuild", 00:17:03.343 "target": "spare", 00:17:03.343 "progress": { 00:17:03.343 "blocks": 2816, 00:17:03.343 "percent": 35 00:17:03.343 } 00:17:03.343 }, 00:17:03.343 "base_bdevs_list": [ 00:17:03.343 { 00:17:03.343 "name": "spare", 00:17:03.343 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:03.343 "is_configured": true, 00:17:03.343 "data_offset": 256, 00:17:03.343 "data_size": 7936 00:17:03.343 }, 00:17:03.343 { 00:17:03.343 "name": "BaseBdev2", 00:17:03.343 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:03.343 "is_configured": true, 00:17:03.343 "data_offset": 256, 00:17:03.343 "data_size": 7936 00:17:03.343 } 00:17:03.343 ] 00:17:03.343 }' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.343 10:45:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.723 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.723 "name": "raid_bdev1", 00:17:04.723 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:04.723 "strip_size_kb": 0, 00:17:04.723 "state": "online", 00:17:04.723 "raid_level": "raid1", 00:17:04.723 "superblock": true, 00:17:04.723 "num_base_bdevs": 2, 00:17:04.723 "num_base_bdevs_discovered": 2, 00:17:04.723 "num_base_bdevs_operational": 2, 00:17:04.723 "process": { 00:17:04.723 "type": "rebuild", 00:17:04.723 "target": "spare", 00:17:04.723 "progress": { 00:17:04.723 "blocks": 5632, 00:17:04.723 "percent": 70 00:17:04.723 } 00:17:04.724 }, 00:17:04.724 "base_bdevs_list": [ 00:17:04.724 { 00:17:04.724 "name": "spare", 00:17:04.724 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:04.724 "is_configured": true, 00:17:04.724 "data_offset": 256, 00:17:04.724 "data_size": 7936 00:17:04.724 
}, 00:17:04.724 { 00:17:04.724 "name": "BaseBdev2", 00:17:04.724 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:04.724 "is_configured": true, 00:17:04.724 "data_offset": 256, 00:17:04.724 "data_size": 7936 00:17:04.724 } 00:17:04.724 ] 00:17:04.724 }' 00:17:04.724 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.724 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.724 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.724 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.724 10:45:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.316 [2024-11-18 10:45:31.036255] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:05.316 [2024-11-18 10:45:31.036364] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:05.316 [2024-11-18 10:45:31.036479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.576 "name": "raid_bdev1", 00:17:05.576 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:05.576 "strip_size_kb": 0, 00:17:05.576 "state": "online", 00:17:05.576 "raid_level": "raid1", 00:17:05.576 "superblock": true, 00:17:05.576 "num_base_bdevs": 2, 00:17:05.576 "num_base_bdevs_discovered": 2, 00:17:05.576 "num_base_bdevs_operational": 2, 00:17:05.576 "base_bdevs_list": [ 00:17:05.576 { 00:17:05.576 "name": "spare", 00:17:05.576 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:05.576 "is_configured": true, 00:17:05.576 "data_offset": 256, 00:17:05.576 "data_size": 7936 00:17:05.576 }, 00:17:05.576 { 00:17:05.576 "name": "BaseBdev2", 00:17:05.576 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:05.576 "is_configured": true, 00:17:05.576 "data_offset": 256, 00:17:05.576 "data_size": 7936 00:17:05.576 } 00:17:05.576 ] 00:17:05.576 }' 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:05.576 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.836 "name": "raid_bdev1", 00:17:05.836 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:05.836 "strip_size_kb": 0, 00:17:05.836 "state": "online", 00:17:05.836 "raid_level": "raid1", 00:17:05.836 "superblock": true, 00:17:05.836 "num_base_bdevs": 2, 00:17:05.836 "num_base_bdevs_discovered": 2, 00:17:05.836 "num_base_bdevs_operational": 2, 00:17:05.836 "base_bdevs_list": [ 00:17:05.836 { 00:17:05.836 "name": "spare", 00:17:05.836 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:05.836 "is_configured": true, 00:17:05.836 "data_offset": 256, 00:17:05.836 "data_size": 7936 00:17:05.836 }, 00:17:05.836 { 00:17:05.836 "name": "BaseBdev2", 00:17:05.836 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:05.836 "is_configured": true, 
00:17:05.836 "data_offset": 256, 00:17:05.836 "data_size": 7936 00:17:05.836 } 00:17:05.836 ] 00:17:05.836 }' 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.836 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.837 10:45:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.837 "name": "raid_bdev1", 00:17:05.837 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:05.837 "strip_size_kb": 0, 00:17:05.837 "state": "online", 00:17:05.837 "raid_level": "raid1", 00:17:05.837 "superblock": true, 00:17:05.837 "num_base_bdevs": 2, 00:17:05.837 "num_base_bdevs_discovered": 2, 00:17:05.837 "num_base_bdevs_operational": 2, 00:17:05.837 "base_bdevs_list": [ 00:17:05.837 { 00:17:05.837 "name": "spare", 00:17:05.837 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:05.837 "is_configured": true, 00:17:05.837 "data_offset": 256, 00:17:05.837 "data_size": 7936 00:17:05.837 }, 00:17:05.837 { 00:17:05.837 "name": "BaseBdev2", 00:17:05.837 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:05.837 "is_configured": true, 00:17:05.837 "data_offset": 256, 00:17:05.837 "data_size": 7936 00:17:05.837 } 00:17:05.837 ] 00:17:05.837 }' 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.837 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.097 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.097 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.097 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.357 [2024-11-18 10:45:31.983449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.357 [2024-11-18 10:45:31.983515] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:17:06.357 [2024-11-18 10:45:31.983605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.357 [2024-11-18 10:45:31.983679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.357 [2024-11-18 10:45:31.983733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:06.357 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.357 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.357 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.357 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.357 10:45:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:06.357 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.357 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:06.357 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:06.357 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:06.357 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:06.357 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.357 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:06.357 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:06.358 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:06.358 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:06.358 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:06.358 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:06.358 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.358 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:06.618 /dev/nbd0 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.618 1+0 records in 00:17:06.618 1+0 records out 00:17:06.618 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245249 s, 16.7 MB/s 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.618 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:06.618 /dev/nbd1 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:06.879 10:45:32 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.879 1+0 records in 00:17:06.879 1+0 records out 00:17:06.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354252 s, 11.6 MB/s 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:06.879 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.139 10:45:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:07.399 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.400 10:45:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.400 [2024-11-18 10:45:33.181630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.400 [2024-11-18 10:45:33.181686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.400 [2024-11-18 10:45:33.181708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:07.400 [2024-11-18 10:45:33.181718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.400 [2024-11-18 10:45:33.183928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.400 [2024-11-18 10:45:33.183962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.400 [2024-11-18 10:45:33.184053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:17:07.400 [2024-11-18 10:45:33.184104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.400 [2024-11-18 10:45:33.184273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.400 spare 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.400 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.660 [2024-11-18 10:45:33.284170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:07.660 [2024-11-18 10:45:33.284223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:07.660 [2024-11-18 10:45:33.284478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:07.660 [2024-11-18 10:45:33.284646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:07.660 [2024-11-18 10:45:33.284667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:07.660 [2024-11-18 10:45:33.284826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.660 
10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.660 "name": "raid_bdev1", 00:17:07.660 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:07.660 "strip_size_kb": 0, 00:17:07.660 "state": "online", 00:17:07.660 "raid_level": "raid1", 00:17:07.660 "superblock": true, 00:17:07.660 "num_base_bdevs": 2, 00:17:07.660 "num_base_bdevs_discovered": 2, 00:17:07.660 "num_base_bdevs_operational": 2, 00:17:07.660 "base_bdevs_list": [ 00:17:07.660 { 00:17:07.660 "name": "spare", 00:17:07.660 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:07.660 "is_configured": true, 00:17:07.660 "data_offset": 256, 00:17:07.660 
"data_size": 7936 00:17:07.660 }, 00:17:07.660 { 00:17:07.660 "name": "BaseBdev2", 00:17:07.660 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:07.660 "is_configured": true, 00:17:07.660 "data_offset": 256, 00:17:07.660 "data_size": 7936 00:17:07.660 } 00:17:07.660 ] 00:17:07.660 }' 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.660 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.920 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.921 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.921 "name": "raid_bdev1", 00:17:07.921 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:07.921 "strip_size_kb": 0, 00:17:07.921 "state": "online", 00:17:07.921 "raid_level": "raid1", 00:17:07.921 "superblock": true, 00:17:07.921 "num_base_bdevs": 2, 
00:17:07.921 "num_base_bdevs_discovered": 2, 00:17:07.921 "num_base_bdevs_operational": 2, 00:17:07.921 "base_bdevs_list": [ 00:17:07.921 { 00:17:07.921 "name": "spare", 00:17:07.921 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:07.921 "is_configured": true, 00:17:07.921 "data_offset": 256, 00:17:07.921 "data_size": 7936 00:17:07.921 }, 00:17:07.921 { 00:17:07.921 "name": "BaseBdev2", 00:17:07.921 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:07.921 "is_configured": true, 00:17:07.921 "data_offset": 256, 00:17:07.921 "data_size": 7936 00:17:07.921 } 00:17:07.921 ] 00:17:07.921 }' 00:17:07.921 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.921 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.181 10:45:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.181 [2024-11-18 10:45:33.912413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.181 
10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.181 "name": "raid_bdev1", 00:17:08.181 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:08.181 "strip_size_kb": 0, 00:17:08.181 "state": "online", 00:17:08.181 "raid_level": "raid1", 00:17:08.181 "superblock": true, 00:17:08.181 "num_base_bdevs": 2, 00:17:08.181 "num_base_bdevs_discovered": 1, 00:17:08.181 "num_base_bdevs_operational": 1, 00:17:08.181 "base_bdevs_list": [ 00:17:08.181 { 00:17:08.181 "name": null, 00:17:08.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.181 "is_configured": false, 00:17:08.181 "data_offset": 0, 00:17:08.181 "data_size": 7936 00:17:08.181 }, 00:17:08.181 { 00:17:08.181 "name": "BaseBdev2", 00:17:08.181 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:08.181 "is_configured": true, 00:17:08.181 "data_offset": 256, 00:17:08.181 "data_size": 7936 00:17:08.181 } 00:17:08.181 ] 00:17:08.181 }' 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.181 10:45:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.441 10:45:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.441 10:45:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.441 10:45:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.441 [2024-11-18 10:45:34.311737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.441 [2024-11-18 10:45:34.311882] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:08.441 [2024-11-18 10:45:34.311898] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:08.441 [2024-11-18 10:45:34.311926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.700 [2024-11-18 10:45:34.326743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:08.700 10:45:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.700 10:45:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:08.700 [2024-11-18 10:45:34.328590] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.640 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.640 "name": "raid_bdev1", 00:17:09.641 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:09.641 "strip_size_kb": 0, 00:17:09.641 "state": "online", 
00:17:09.641 "raid_level": "raid1", 00:17:09.641 "superblock": true, 00:17:09.641 "num_base_bdevs": 2, 00:17:09.641 "num_base_bdevs_discovered": 2, 00:17:09.641 "num_base_bdevs_operational": 2, 00:17:09.641 "process": { 00:17:09.641 "type": "rebuild", 00:17:09.641 "target": "spare", 00:17:09.641 "progress": { 00:17:09.641 "blocks": 2560, 00:17:09.641 "percent": 32 00:17:09.641 } 00:17:09.641 }, 00:17:09.641 "base_bdevs_list": [ 00:17:09.641 { 00:17:09.641 "name": "spare", 00:17:09.641 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:09.641 "is_configured": true, 00:17:09.641 "data_offset": 256, 00:17:09.641 "data_size": 7936 00:17:09.641 }, 00:17:09.641 { 00:17:09.641 "name": "BaseBdev2", 00:17:09.641 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:09.641 "is_configured": true, 00:17:09.641 "data_offset": 256, 00:17:09.641 "data_size": 7936 00:17:09.641 } 00:17:09.641 ] 00:17:09.641 }' 00:17:09.641 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.641 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.641 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.641 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.641 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:09.641 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.641 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 [2024-11-18 10:45:35.487776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.901 [2024-11-18 10:45:35.533217] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:09.901 [2024-11-18 
10:45:35.533268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.901 [2024-11-18 10:45:35.533281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.901 [2024-11-18 10:45:35.533291] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.901 "name": "raid_bdev1", 00:17:09.901 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:09.901 "strip_size_kb": 0, 00:17:09.901 "state": "online", 00:17:09.901 "raid_level": "raid1", 00:17:09.901 "superblock": true, 00:17:09.901 "num_base_bdevs": 2, 00:17:09.901 "num_base_bdevs_discovered": 1, 00:17:09.901 "num_base_bdevs_operational": 1, 00:17:09.901 "base_bdevs_list": [ 00:17:09.901 { 00:17:09.901 "name": null, 00:17:09.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.901 "is_configured": false, 00:17:09.901 "data_offset": 0, 00:17:09.901 "data_size": 7936 00:17:09.901 }, 00:17:09.901 { 00:17:09.901 "name": "BaseBdev2", 00:17:09.901 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:09.901 "is_configured": true, 00:17:09.901 "data_offset": 256, 00:17:09.901 "data_size": 7936 00:17:09.901 } 00:17:09.901 ] 00:17:09.901 }' 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.901 10:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.162 10:45:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:10.162 10:45:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.162 10:45:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.162 [2024-11-18 10:45:36.028012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:10.162 [2024-11-18 10:45:36.028061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.162 [2024-11-18 10:45:36.028079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:17:10.162 [2024-11-18 10:45:36.028090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.162 [2024-11-18 10:45:36.028541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.162 [2024-11-18 10:45:36.028569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:10.162 [2024-11-18 10:45:36.028645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:10.162 [2024-11-18 10:45:36.028659] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:10.162 [2024-11-18 10:45:36.028670] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:10.162 [2024-11-18 10:45:36.028694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.162 [2024-11-18 10:45:36.043461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:10.162 spare 00:17:10.162 10:45:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.162 10:45:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:10.162 [2024-11-18 10:45:36.045235] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.544 "name": "raid_bdev1", 00:17:11.544 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:11.544 "strip_size_kb": 0, 00:17:11.544 "state": "online", 00:17:11.544 "raid_level": "raid1", 00:17:11.544 "superblock": true, 00:17:11.544 "num_base_bdevs": 2, 00:17:11.544 "num_base_bdevs_discovered": 2, 00:17:11.544 "num_base_bdevs_operational": 2, 00:17:11.544 "process": { 00:17:11.544 "type": "rebuild", 00:17:11.544 "target": "spare", 00:17:11.544 "progress": { 00:17:11.544 "blocks": 2560, 00:17:11.544 "percent": 32 00:17:11.544 } 00:17:11.544 }, 00:17:11.544 "base_bdevs_list": [ 00:17:11.544 { 00:17:11.544 "name": "spare", 00:17:11.544 "uuid": "6b107a37-a042-5b13-8449-ee2fa0a1dd06", 00:17:11.544 "is_configured": true, 00:17:11.544 "data_offset": 256, 00:17:11.544 "data_size": 7936 00:17:11.544 }, 00:17:11.544 { 00:17:11.544 "name": "BaseBdev2", 00:17:11.544 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:11.544 "is_configured": true, 00:17:11.544 "data_offset": 256, 00:17:11.544 "data_size": 7936 00:17:11.544 } 00:17:11.544 ] 00:17:11.544 }' 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.544 [2024-11-18 10:45:37.188767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:11.544 [2024-11-18 10:45:37.249620] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:11.544 [2024-11-18 10:45:37.249670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.544 [2024-11-18 10:45:37.249686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:11.544 [2024-11-18 10:45:37.249693] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.544 "name": "raid_bdev1", 00:17:11.544 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:11.544 "strip_size_kb": 0, 00:17:11.544 "state": "online", 00:17:11.544 "raid_level": "raid1", 00:17:11.544 "superblock": true, 00:17:11.544 "num_base_bdevs": 2, 00:17:11.544 "num_base_bdevs_discovered": 1, 00:17:11.544 "num_base_bdevs_operational": 1, 00:17:11.544 "base_bdevs_list": [ 00:17:11.544 { 00:17:11.544 "name": null, 00:17:11.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.544 "is_configured": false, 00:17:11.544 "data_offset": 0, 00:17:11.544 "data_size": 7936 00:17:11.544 }, 00:17:11.544 { 00:17:11.544 "name": "BaseBdev2", 00:17:11.544 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:11.544 "is_configured": true, 00:17:11.544 "data_offset": 256, 00:17:11.544 "data_size": 7936 00:17:11.544 } 00:17:11.544 ] 00:17:11.544 }' 
00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.544 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.114 "name": "raid_bdev1", 00:17:12.114 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:12.114 "strip_size_kb": 0, 00:17:12.114 "state": "online", 00:17:12.114 "raid_level": "raid1", 00:17:12.114 "superblock": true, 00:17:12.114 "num_base_bdevs": 2, 00:17:12.114 "num_base_bdevs_discovered": 1, 00:17:12.114 "num_base_bdevs_operational": 1, 00:17:12.114 "base_bdevs_list": [ 00:17:12.114 { 00:17:12.114 "name": null, 00:17:12.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.114 "is_configured": false, 00:17:12.114 "data_offset": 0, 
00:17:12.114 "data_size": 7936 00:17:12.114 }, 00:17:12.114 { 00:17:12.114 "name": "BaseBdev2", 00:17:12.114 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:12.114 "is_configured": true, 00:17:12.114 "data_offset": 256, 00:17:12.114 "data_size": 7936 00:17:12.114 } 00:17:12.114 ] 00:17:12.114 }' 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.114 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.115 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.115 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:12.115 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.115 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.115 [2024-11-18 10:45:37.849602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:12.115 [2024-11-18 10:45:37.849693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.115 [2024-11-18 10:45:37.849718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:12.115 [2024-11-18 10:45:37.849736] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.115 [2024-11-18 10:45:37.850157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.115 [2024-11-18 10:45:37.850174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:12.115 [2024-11-18 10:45:37.850263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:12.115 [2024-11-18 10:45:37.850279] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:12.115 [2024-11-18 10:45:37.850289] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:12.115 [2024-11-18 10:45:37.850298] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:12.115 BaseBdev1 00:17:12.115 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.115 10:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.056 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.056 "name": "raid_bdev1", 00:17:13.056 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:13.056 "strip_size_kb": 0, 00:17:13.056 "state": "online", 00:17:13.057 "raid_level": "raid1", 00:17:13.057 "superblock": true, 00:17:13.057 "num_base_bdevs": 2, 00:17:13.057 "num_base_bdevs_discovered": 1, 00:17:13.057 "num_base_bdevs_operational": 1, 00:17:13.057 "base_bdevs_list": [ 00:17:13.057 { 00:17:13.057 "name": null, 00:17:13.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.057 "is_configured": false, 00:17:13.057 "data_offset": 0, 00:17:13.057 "data_size": 7936 00:17:13.057 }, 00:17:13.057 { 00:17:13.057 "name": "BaseBdev2", 00:17:13.057 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:13.057 "is_configured": true, 00:17:13.057 "data_offset": 256, 00:17:13.057 "data_size": 7936 00:17:13.057 } 00:17:13.057 ] 00:17:13.057 }' 00:17:13.057 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.057 10:45:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
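The `verify_raid_bdev_state` trace above pulls single fields (state, raid_level, discovered/operational counts) out of the `bdev_raid_get_bdevs` JSON with jq. The following stand-in sketch shows the same extraction pattern without requiring jq; `get_field` is a hypothetical helper, not part of the harness, and the field names are taken from the JSON dumped in the log.

```shell
# Stand-in for the harness's jq extraction (assumption: sed instead of jq).
# Mirrors jq's '.field // "none"' default when the key is absent.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 1
}'
get_field() {
  # Print the string value of the given key, or "none" if the key is absent.
  val=$(printf '%s\n' "$raid_bdev_info" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p")
  printf '%s\n' "${val:-none}"
}
get_field state       # -> online
get_field process     # -> none (key absent)
```

The real helper keeps the whole JSON object in `raid_bdev_info` and re-filters it per check, exactly as the trace shows.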
00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.626 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.626 "name": "raid_bdev1", 00:17:13.627 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:13.627 "strip_size_kb": 0, 00:17:13.627 "state": "online", 00:17:13.627 "raid_level": "raid1", 00:17:13.627 "superblock": true, 00:17:13.627 "num_base_bdevs": 2, 00:17:13.627 "num_base_bdevs_discovered": 1, 00:17:13.627 "num_base_bdevs_operational": 1, 00:17:13.627 "base_bdevs_list": [ 00:17:13.627 { 00:17:13.627 "name": null, 00:17:13.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.627 "is_configured": false, 00:17:13.627 "data_offset": 0, 00:17:13.627 "data_size": 7936 00:17:13.627 }, 00:17:13.627 { 00:17:13.627 "name": "BaseBdev2", 00:17:13.627 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:13.627 "is_configured": true, 
00:17:13.627 "data_offset": 256, 00:17:13.627 "data_size": 7936 00:17:13.627 } 00:17:13.627 ] 00:17:13.627 }' 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.627 [2024-11-18 10:45:39.411023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.627 [2024-11-18 10:45:39.411147] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:13.627 [2024-11-18 10:45:39.411160] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:13.627 request: 00:17:13.627 { 00:17:13.627 "base_bdev": "BaseBdev1", 00:17:13.627 "raid_bdev": "raid_bdev1", 00:17:13.627 "method": "bdev_raid_add_base_bdev", 00:17:13.627 "req_id": 1 00:17:13.627 } 00:17:13.627 Got JSON-RPC error response 00:17:13.627 response: 00:17:13.627 { 00:17:13.627 "code": -22, 00:17:13.627 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:13.627 } 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.627 10:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.567 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.827 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.827 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.827 "name": "raid_bdev1", 00:17:14.827 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:14.827 "strip_size_kb": 0, 00:17:14.827 "state": "online", 00:17:14.827 "raid_level": "raid1", 00:17:14.827 "superblock": true, 00:17:14.827 "num_base_bdevs": 2, 00:17:14.827 "num_base_bdevs_discovered": 1, 00:17:14.827 "num_base_bdevs_operational": 1, 00:17:14.827 "base_bdevs_list": [ 00:17:14.827 { 00:17:14.827 "name": null, 00:17:14.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.827 "is_configured": false, 00:17:14.827 "data_offset": 0, 00:17:14.827 "data_size": 7936 00:17:14.827 }, 00:17:14.827 { 00:17:14.827 "name": "BaseBdev2", 00:17:14.827 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:14.827 "is_configured": true, 00:17:14.827 "data_offset": 256, 00:17:14.827 "data_size": 7936 00:17:14.827 } 00:17:14.827 ] 00:17:14.827 }' 
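The `NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` step traced above asserts that the RPC is rejected (the log shows the -22 JSON-RPC error). A minimal sketch of that inversion wrapper follows; it is simplified relative to the real `common/autotest_common.sh` version, which also records the exit status in `es` for the later `(( es > 128 ))` checks.

```shell
# Simplified sketch of the harness's NOT wrapper: succeed only when the
# wrapped command fails, so an expected RPC rejection passes the test.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded
  fi
  return 0      # command failed, as the test expects
}

# Hypothetical stand-in for the rejected JSON-RPC call (error code -22 in the log).
rejected_rpc() { return 22; }

if NOT rejected_rpc; then
  echo "expected failure observed"
fi
```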
00:17:14.827 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.827 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.087 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.087 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.087 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.087 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.087 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.087 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.088 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.088 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.088 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.088 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.088 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.088 "name": "raid_bdev1", 00:17:15.088 "uuid": "73e56e86-efc7-49de-8cba-127369aef08a", 00:17:15.088 "strip_size_kb": 0, 00:17:15.088 "state": "online", 00:17:15.088 "raid_level": "raid1", 00:17:15.088 "superblock": true, 00:17:15.088 "num_base_bdevs": 2, 00:17:15.088 "num_base_bdevs_discovered": 1, 00:17:15.088 "num_base_bdevs_operational": 1, 00:17:15.088 "base_bdevs_list": [ 00:17:15.088 { 00:17:15.088 "name": null, 00:17:15.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.088 "is_configured": false, 00:17:15.088 "data_offset": 0, 
00:17:15.088 "data_size": 7936 00:17:15.088 }, 00:17:15.088 { 00:17:15.088 "name": "BaseBdev2", 00:17:15.088 "uuid": "028e3c5f-b137-5333-a128-5be99b4e8de6", 00:17:15.088 "is_configured": true, 00:17:15.088 "data_offset": 256, 00:17:15.088 "data_size": 7936 00:17:15.088 } 00:17:15.088 ] 00:17:15.088 }' 00:17:15.088 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.348 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.348 10:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86304 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86304 ']' 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86304 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86304 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86304' 00:17:15.348 killing process with pid 86304 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86304 00:17:15.348 Received shutdown signal, test time was about 
60.000000 seconds 00:17:15.348 00:17:15.348 Latency(us) 00:17:15.348 [2024-11-18T10:45:41.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.348 [2024-11-18T10:45:41.233Z] =================================================================================================================== 00:17:15.348 [2024-11-18T10:45:41.233Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:15.348 [2024-11-18 10:45:41.082378] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.348 [2024-11-18 10:45:41.082477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.348 [2024-11-18 10:45:41.082519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.348 [2024-11-18 10:45:41.082531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:15.348 10:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86304 00:17:15.609 [2024-11-18 10:45:41.364736] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.549 10:45:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:16.549 00:17:16.549 real 0m19.602s 00:17:16.549 user 0m25.372s 00:17:16.549 sys 0m2.820s 00:17:16.549 10:45:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.549 10:45:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.549 ************************************ 00:17:16.549 END TEST raid_rebuild_test_sb_4k 00:17:16.549 ************************************ 00:17:16.809 10:45:42 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:16.809 10:45:42 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:16.809 10:45:42 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:16.809 10:45:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.809 10:45:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.809 ************************************ 00:17:16.809 START TEST raid_state_function_test_sb_md_separate 00:17:16.809 ************************************ 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:16.809 10:45:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:16.809 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86990 00:17:16.810 Process raid pid: 86990 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86990' 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86990 00:17:16.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
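The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from `waitforlisten` with `max_retries=100`, as traced above. The sketch below only illustrates the polling shape under that assumption; the real helper additionally checks that the pid is alive and that `rpc_cmd` answers on the socket.

```shell
# Illustrative polling loop (assumption: simplified from the real waitforlisten).
waitforlisten_sketch() {
  sock=$1
  max_retries=${2:-100}
  while [ "$max_retries" -gt 0 ]; do
    if [ -S "$sock" ]; then
      return 0          # socket exists; the RPC server is listening
    fi
    max_retries=$((max_retries - 1))
    sleep 0.1
  done
  return 1              # timed out waiting for the socket
}

# With no server running, the wait times out:
waitforlisten_sketch /tmp/no_such_spdk.sock 2 || echo "timed out"
```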
00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86990 ']' 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.810 10:45:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.810 [2024-11-18 10:45:42.573314] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:16.810 [2024-11-18 10:45:42.573518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.070 [2024-11-18 10:45:42.751552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.070 [2024-11-18 10:45:42.856556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.329 [2024-11-18 10:45:43.050106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.329 [2024-11-18 10:45:43.050143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:17.590 10:45:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.590 [2024-11-18 10:45:43.382318] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:17.590 [2024-11-18 10:45:43.382393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:17.590 [2024-11-18 10:45:43.382403] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:17.590 [2024-11-18 10:45:43.382412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.590 "name": "Existed_Raid", 00:17:17.590 "uuid": "7197a556-c150-48ce-bf5f-ce83e03d1d26", 00:17:17.590 "strip_size_kb": 0, 00:17:17.590 "state": "configuring", 00:17:17.590 "raid_level": "raid1", 00:17:17.590 "superblock": true, 00:17:17.590 "num_base_bdevs": 2, 00:17:17.590 "num_base_bdevs_discovered": 0, 00:17:17.590 "num_base_bdevs_operational": 2, 00:17:17.590 "base_bdevs_list": [ 00:17:17.590 { 00:17:17.590 "name": "BaseBdev1", 00:17:17.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.590 "is_configured": false, 00:17:17.590 "data_offset": 0, 00:17:17.590 "data_size": 0 00:17:17.590 }, 00:17:17.590 { 00:17:17.590 "name": "BaseBdev2", 00:17:17.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.590 "is_configured": false, 00:17:17.590 "data_offset": 0, 00:17:17.590 "data_size": 0 00:17:17.590 } 00:17:17.590 ] 
00:17:17.590 }' 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.590 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.160 [2024-11-18 10:45:43.841440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.160 [2024-11-18 10:45:43.841511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.160 [2024-11-18 10:45:43.853421] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.160 [2024-11-18 10:45:43.853505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.160 [2024-11-18 10:45:43.853531] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.160 [2024-11-18 10:45:43.853555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.160 
10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.160 [2024-11-18 10:45:43.895560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.160 BaseBdev1 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.160 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.160 [ 00:17:18.160 { 00:17:18.160 "name": "BaseBdev1", 00:17:18.160 "aliases": [ 00:17:18.160 "6b33f0f1-ab15-4101-a67e-3e590069b358" 00:17:18.160 ], 00:17:18.160 "product_name": "Malloc disk", 00:17:18.160 "block_size": 4096, 00:17:18.160 "num_blocks": 8192, 00:17:18.160 "uuid": "6b33f0f1-ab15-4101-a67e-3e590069b358", 00:17:18.160 "md_size": 32, 00:17:18.160 "md_interleave": false, 00:17:18.160 "dif_type": 0, 00:17:18.160 "assigned_rate_limits": { 00:17:18.160 "rw_ios_per_sec": 0, 00:17:18.160 "rw_mbytes_per_sec": 0, 00:17:18.160 "r_mbytes_per_sec": 0, 00:17:18.160 "w_mbytes_per_sec": 0 00:17:18.160 }, 00:17:18.160 "claimed": true, 00:17:18.160 "claim_type": "exclusive_write", 00:17:18.160 "zoned": false, 00:17:18.160 "supported_io_types": { 00:17:18.160 "read": true, 00:17:18.160 "write": true, 00:17:18.160 "unmap": true, 00:17:18.160 "flush": true, 00:17:18.160 "reset": true, 00:17:18.160 "nvme_admin": false, 00:17:18.160 "nvme_io": false, 00:17:18.160 "nvme_io_md": false, 00:17:18.160 "write_zeroes": true, 00:17:18.160 "zcopy": true, 00:17:18.160 "get_zone_info": false, 00:17:18.161 "zone_management": false, 00:17:18.161 "zone_append": false, 00:17:18.161 "compare": false, 00:17:18.161 "compare_and_write": false, 00:17:18.161 "abort": true, 00:17:18.161 "seek_hole": false, 00:17:18.161 "seek_data": false, 00:17:18.161 "copy": true, 00:17:18.161 "nvme_iov_md": false 00:17:18.161 }, 00:17:18.161 "memory_domains": [ 00:17:18.161 { 00:17:18.161 "dma_device_id": "system", 00:17:18.161 "dma_device_type": 1 00:17:18.161 }, 
00:17:18.161 { 00:17:18.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.161 "dma_device_type": 2 00:17:18.161 } 00:17:18.161 ], 00:17:18.161 "driver_specific": {} 00:17:18.161 } 00:17:18.161 ] 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.161 "name": "Existed_Raid", 00:17:18.161 "uuid": "eafeb0b1-9d8d-49b0-922c-b7dc19b211e9", 00:17:18.161 "strip_size_kb": 0, 00:17:18.161 "state": "configuring", 00:17:18.161 "raid_level": "raid1", 00:17:18.161 "superblock": true, 00:17:18.161 "num_base_bdevs": 2, 00:17:18.161 "num_base_bdevs_discovered": 1, 00:17:18.161 "num_base_bdevs_operational": 2, 00:17:18.161 "base_bdevs_list": [ 00:17:18.161 { 00:17:18.161 "name": "BaseBdev1", 00:17:18.161 "uuid": "6b33f0f1-ab15-4101-a67e-3e590069b358", 00:17:18.161 "is_configured": true, 00:17:18.161 "data_offset": 256, 00:17:18.161 "data_size": 7936 00:17:18.161 }, 00:17:18.161 { 00:17:18.161 "name": "BaseBdev2", 00:17:18.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.161 "is_configured": false, 00:17:18.161 "data_offset": 0, 00:17:18.161 "data_size": 0 00:17:18.161 } 00:17:18.161 ] 00:17:18.161 }' 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.161 10:45:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:18.731 [2024-11-18 10:45:44.366852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.731 [2024-11-18 10:45:44.366887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.731 [2024-11-18 10:45:44.378876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.731 [2024-11-18 10:45:44.380646] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.731 [2024-11-18 10:45:44.380687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.731 "name": "Existed_Raid", 00:17:18.731 "uuid": "6d32ff43-2799-4c4f-a48b-50143a082705", 00:17:18.731 "strip_size_kb": 0, 00:17:18.731 "state": "configuring", 00:17:18.731 "raid_level": "raid1", 00:17:18.731 "superblock": true, 00:17:18.731 "num_base_bdevs": 2, 00:17:18.731 "num_base_bdevs_discovered": 1, 00:17:18.731 
"num_base_bdevs_operational": 2, 00:17:18.731 "base_bdevs_list": [ 00:17:18.731 { 00:17:18.731 "name": "BaseBdev1", 00:17:18.731 "uuid": "6b33f0f1-ab15-4101-a67e-3e590069b358", 00:17:18.731 "is_configured": true, 00:17:18.731 "data_offset": 256, 00:17:18.731 "data_size": 7936 00:17:18.731 }, 00:17:18.731 { 00:17:18.731 "name": "BaseBdev2", 00:17:18.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.731 "is_configured": false, 00:17:18.731 "data_offset": 0, 00:17:18.731 "data_size": 0 00:17:18.731 } 00:17:18.731 ] 00:17:18.731 }' 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.731 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.991 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:18.991 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.991 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.251 [2024-11-18 10:45:44.877186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:19.251 [2024-11-18 10:45:44.877462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:19.251 [2024-11-18 10:45:44.877498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.251 [2024-11-18 10:45:44.877617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:19.251 [2024-11-18 10:45:44.877776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:19.251 [2024-11-18 10:45:44.877815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:19.251 [2024-11-18 
10:45:44.877933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.251 BaseBdev2 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.251 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.251 [ 00:17:19.251 { 00:17:19.251 "name": "BaseBdev2", 00:17:19.251 "aliases": [ 00:17:19.251 
"de5f2b2a-45bf-46e8-9bb8-9e402216b0a2" 00:17:19.251 ], 00:17:19.251 "product_name": "Malloc disk", 00:17:19.251 "block_size": 4096, 00:17:19.251 "num_blocks": 8192, 00:17:19.251 "uuid": "de5f2b2a-45bf-46e8-9bb8-9e402216b0a2", 00:17:19.251 "md_size": 32, 00:17:19.251 "md_interleave": false, 00:17:19.251 "dif_type": 0, 00:17:19.251 "assigned_rate_limits": { 00:17:19.251 "rw_ios_per_sec": 0, 00:17:19.251 "rw_mbytes_per_sec": 0, 00:17:19.251 "r_mbytes_per_sec": 0, 00:17:19.251 "w_mbytes_per_sec": 0 00:17:19.251 }, 00:17:19.251 "claimed": true, 00:17:19.252 "claim_type": "exclusive_write", 00:17:19.252 "zoned": false, 00:17:19.252 "supported_io_types": { 00:17:19.252 "read": true, 00:17:19.252 "write": true, 00:17:19.252 "unmap": true, 00:17:19.252 "flush": true, 00:17:19.252 "reset": true, 00:17:19.252 "nvme_admin": false, 00:17:19.252 "nvme_io": false, 00:17:19.252 "nvme_io_md": false, 00:17:19.252 "write_zeroes": true, 00:17:19.252 "zcopy": true, 00:17:19.252 "get_zone_info": false, 00:17:19.252 "zone_management": false, 00:17:19.252 "zone_append": false, 00:17:19.252 "compare": false, 00:17:19.252 "compare_and_write": false, 00:17:19.252 "abort": true, 00:17:19.252 "seek_hole": false, 00:17:19.252 "seek_data": false, 00:17:19.252 "copy": true, 00:17:19.252 "nvme_iov_md": false 00:17:19.252 }, 00:17:19.252 "memory_domains": [ 00:17:19.252 { 00:17:19.252 "dma_device_id": "system", 00:17:19.252 "dma_device_type": 1 00:17:19.252 }, 00:17:19.252 { 00:17:19.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.252 "dma_device_type": 2 00:17:19.252 } 00:17:19.252 ], 00:17:19.252 "driver_specific": {} 00:17:19.252 } 00:17:19.252 ] 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.252 10:45:44 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.252 "name": "Existed_Raid", 00:17:19.252 "uuid": "6d32ff43-2799-4c4f-a48b-50143a082705", 00:17:19.252 "strip_size_kb": 0, 00:17:19.252 "state": "online", 00:17:19.252 "raid_level": "raid1", 00:17:19.252 "superblock": true, 00:17:19.252 "num_base_bdevs": 2, 00:17:19.252 "num_base_bdevs_discovered": 2, 00:17:19.252 "num_base_bdevs_operational": 2, 00:17:19.252 "base_bdevs_list": [ 00:17:19.252 { 00:17:19.252 "name": "BaseBdev1", 00:17:19.252 "uuid": "6b33f0f1-ab15-4101-a67e-3e590069b358", 00:17:19.252 "is_configured": true, 00:17:19.252 "data_offset": 256, 00:17:19.252 "data_size": 7936 00:17:19.252 }, 00:17:19.252 { 00:17:19.252 "name": "BaseBdev2", 00:17:19.252 "uuid": "de5f2b2a-45bf-46e8-9bb8-9e402216b0a2", 00:17:19.252 "is_configured": true, 00:17:19.252 "data_offset": 256, 00:17:19.252 "data_size": 7936 00:17:19.252 } 00:17:19.252 ] 00:17:19.252 }' 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.252 10:45:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:19.512 10:45:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.512 [2024-11-18 10:45:45.336650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.512 "name": "Existed_Raid", 00:17:19.512 "aliases": [ 00:17:19.512 "6d32ff43-2799-4c4f-a48b-50143a082705" 00:17:19.512 ], 00:17:19.512 "product_name": "Raid Volume", 00:17:19.512 "block_size": 4096, 00:17:19.512 "num_blocks": 7936, 00:17:19.512 "uuid": "6d32ff43-2799-4c4f-a48b-50143a082705", 00:17:19.512 "md_size": 32, 00:17:19.512 "md_interleave": false, 00:17:19.512 "dif_type": 0, 00:17:19.512 "assigned_rate_limits": { 00:17:19.512 "rw_ios_per_sec": 0, 00:17:19.512 "rw_mbytes_per_sec": 0, 00:17:19.512 "r_mbytes_per_sec": 0, 00:17:19.512 "w_mbytes_per_sec": 0 00:17:19.512 }, 00:17:19.512 "claimed": false, 00:17:19.512 "zoned": false, 00:17:19.512 "supported_io_types": { 00:17:19.512 "read": true, 00:17:19.512 "write": true, 00:17:19.512 "unmap": false, 00:17:19.512 "flush": false, 00:17:19.512 "reset": true, 00:17:19.512 "nvme_admin": false, 00:17:19.512 "nvme_io": false, 00:17:19.512 "nvme_io_md": false, 00:17:19.512 "write_zeroes": true, 00:17:19.512 "zcopy": false, 00:17:19.512 "get_zone_info": 
false, 00:17:19.512 "zone_management": false, 00:17:19.512 "zone_append": false, 00:17:19.512 "compare": false, 00:17:19.512 "compare_and_write": false, 00:17:19.512 "abort": false, 00:17:19.512 "seek_hole": false, 00:17:19.512 "seek_data": false, 00:17:19.512 "copy": false, 00:17:19.512 "nvme_iov_md": false 00:17:19.512 }, 00:17:19.512 "memory_domains": [ 00:17:19.512 { 00:17:19.512 "dma_device_id": "system", 00:17:19.512 "dma_device_type": 1 00:17:19.512 }, 00:17:19.512 { 00:17:19.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.512 "dma_device_type": 2 00:17:19.512 }, 00:17:19.512 { 00:17:19.512 "dma_device_id": "system", 00:17:19.512 "dma_device_type": 1 00:17:19.512 }, 00:17:19.512 { 00:17:19.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.512 "dma_device_type": 2 00:17:19.512 } 00:17:19.512 ], 00:17:19.512 "driver_specific": { 00:17:19.512 "raid": { 00:17:19.512 "uuid": "6d32ff43-2799-4c4f-a48b-50143a082705", 00:17:19.512 "strip_size_kb": 0, 00:17:19.512 "state": "online", 00:17:19.512 "raid_level": "raid1", 00:17:19.512 "superblock": true, 00:17:19.512 "num_base_bdevs": 2, 00:17:19.512 "num_base_bdevs_discovered": 2, 00:17:19.512 "num_base_bdevs_operational": 2, 00:17:19.512 "base_bdevs_list": [ 00:17:19.512 { 00:17:19.512 "name": "BaseBdev1", 00:17:19.512 "uuid": "6b33f0f1-ab15-4101-a67e-3e590069b358", 00:17:19.512 "is_configured": true, 00:17:19.512 "data_offset": 256, 00:17:19.512 "data_size": 7936 00:17:19.512 }, 00:17:19.512 { 00:17:19.512 "name": "BaseBdev2", 00:17:19.512 "uuid": "de5f2b2a-45bf-46e8-9bb8-9e402216b0a2", 00:17:19.512 "is_configured": true, 00:17:19.512 "data_offset": 256, 00:17:19.512 "data_size": 7936 00:17:19.512 } 00:17:19.512 ] 00:17:19.512 } 00:17:19.512 } 00:17:19.512 }' 00:17:19.512 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.771 10:45:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:19.771 BaseBdev2' 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.771 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.771 [2024-11-18 10:45:45.572038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.031 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.032 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.032 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.032 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.032 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.032 "name": "Existed_Raid", 00:17:20.032 "uuid": 
"6d32ff43-2799-4c4f-a48b-50143a082705", 00:17:20.032 "strip_size_kb": 0, 00:17:20.032 "state": "online", 00:17:20.032 "raid_level": "raid1", 00:17:20.032 "superblock": true, 00:17:20.032 "num_base_bdevs": 2, 00:17:20.032 "num_base_bdevs_discovered": 1, 00:17:20.032 "num_base_bdevs_operational": 1, 00:17:20.032 "base_bdevs_list": [ 00:17:20.032 { 00:17:20.032 "name": null, 00:17:20.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.032 "is_configured": false, 00:17:20.032 "data_offset": 0, 00:17:20.032 "data_size": 7936 00:17:20.032 }, 00:17:20.032 { 00:17:20.032 "name": "BaseBdev2", 00:17:20.032 "uuid": "de5f2b2a-45bf-46e8-9bb8-9e402216b0a2", 00:17:20.032 "is_configured": true, 00:17:20.032 "data_offset": 256, 00:17:20.032 "data_size": 7936 00:17:20.032 } 00:17:20.032 ] 00:17:20.032 }' 00:17:20.032 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.032 10:45:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.291 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:20.291 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:20.291 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:20.291 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.291 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.291 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.291 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:20.551 [2024-11-18 10:45:46.190266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:20.551 [2024-11-18 10:45:46.190364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:20.551 [2024-11-18 10:45:46.286814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:20.551 [2024-11-18 10:45:46.286866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:20.551 [2024-11-18 10:45:46.286877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86990
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86990 ']'
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86990
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86990
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:20.551 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86990'
killing process with pid 86990
00:17:20.552 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86990
[2024-11-18 10:45:46.387803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:20.552 10:45:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86990
[2024-11-18 10:45:46.404032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:21.933 10:45:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0
00:17:21.933
00:17:21.933 real 0m4.974s
00:17:21.933 user 0m7.169s
00:17:21.933 sys 0m0.921s
00:17:21.933 ************************************
00:17:21.933 END TEST raid_state_function_test_sb_md_separate
00:17:21.933 ************************************
00:17:21.933 10:45:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:21.933 10:45:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:21.933 10:45:47 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2
00:17:21.933 10:45:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:17:21.933 10:45:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:21.933 10:45:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:21.933 ************************************
00:17:21.933 START TEST raid_superblock_test_md_separate
00:17:21.933 ************************************
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:17:21.933 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87237
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87237
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87237 ']'
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:21.934 10:45:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:21.934 [2024-11-18 10:45:47.623188] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:17:21.934 [2024-11-18 10:45:47.623452] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87237 ]
00:17:21.934 [2024-11-18 10:45:47.802553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:22.194 [2024-11-18 10:45:47.911766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:22.458 [2024-11-18 10:45:48.108731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:22.732 [2024-11-18 10:45:48.108855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:22.732 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:22.733 malloc1
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:22.733 [2024-11-18 10:45:48.491132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:22.733 [2024-11-18 10:45:48.491239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:22.733 [2024-11-18 10:45:48.491276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:22.733 [2024-11-18 10:45:48.491306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:22.733 [2024-11-18 10:45:48.493211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:22.733 [2024-11-18 10:45:48.493278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:22.733 pt1
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:22.733 malloc2
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:22.733 [2024-11-18 10:45:48.547043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:22.733 [2024-11-18 10:45:48.547091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:22.733 [2024-11-18 10:45:48.547111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:22.733 [2024-11-18 10:45:48.547120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:22.733 [2024-11-18 10:45:48.548909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:22.733 [2024-11-18 10:45:48.548943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:22.733 pt2
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:22.733 [2024-11-18 10:45:48.559050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:22.733 [2024-11-18 10:45:48.560785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:22.733 [2024-11-18 10:45:48.560954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:17:22.733 [2024-11-18 10:45:48.560969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:22.733 [2024-11-18 10:45:48.561040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:17:22.733 [2024-11-18 10:45:48.561145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:17:22.733 [2024-11-18 10:45:48.561156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:17:22.733 [2024-11-18 10:45:48.561278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:22.733 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.008 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:23.008 "name": "raid_bdev1",
00:17:23.008 "uuid": "a487091d-26b8-4235-9458-5970afc3709d",
00:17:23.008 "strip_size_kb": 0,
00:17:23.008 "state": "online",
00:17:23.008 "raid_level": "raid1",
00:17:23.008 "superblock": true,
00:17:23.008 "num_base_bdevs": 2,
00:17:23.008 "num_base_bdevs_discovered": 2,
00:17:23.008 "num_base_bdevs_operational": 2,
00:17:23.008 "base_bdevs_list": [
00:17:23.008 {
00:17:23.008 "name": "pt1",
00:17:23.008 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:23.008 "is_configured": true,
00:17:23.008 "data_offset": 256,
00:17:23.008 "data_size": 7936
00:17:23.008 },
00:17:23.008 {
00:17:23.008 "name": "pt2",
00:17:23.008 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:23.008 "is_configured": true,
00:17:23.008 "data_offset": 256,
00:17:23.008 "data_size": 7936
00:17:23.008 }
00:17:23.008 ]
00:17:23.008 }'
00:17:23.008 10:45:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:23.008 10:45:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:23.268 [2024-11-18 10:45:49.042454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.268 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:23.268 "name": "raid_bdev1",
00:17:23.268 "aliases": [
00:17:23.268 "a487091d-26b8-4235-9458-5970afc3709d"
00:17:23.268 ],
00:17:23.268 "product_name": "Raid Volume",
00:17:23.268 "block_size": 4096,
00:17:23.268 "num_blocks": 7936,
00:17:23.268 "uuid": "a487091d-26b8-4235-9458-5970afc3709d",
00:17:23.268 "md_size": 32,
00:17:23.268 "md_interleave": false,
00:17:23.268 "dif_type": 0,
00:17:23.268 "assigned_rate_limits": {
00:17:23.268 "rw_ios_per_sec": 0,
00:17:23.268 "rw_mbytes_per_sec": 0,
00:17:23.268 "r_mbytes_per_sec": 0,
00:17:23.268 "w_mbytes_per_sec": 0
00:17:23.268 },
00:17:23.268 "claimed": false,
00:17:23.268 "zoned": false,
00:17:23.268 "supported_io_types": {
00:17:23.268 "read": true,
00:17:23.268 "write": true,
00:17:23.268 "unmap": false,
00:17:23.268 "flush": false,
00:17:23.268 "reset": true,
00:17:23.268 "nvme_admin": false,
00:17:23.268 "nvme_io": false,
00:17:23.268 "nvme_io_md": false,
00:17:23.268 "write_zeroes": true,
00:17:23.268 "zcopy": false,
00:17:23.268 "get_zone_info": false,
00:17:23.268 "zone_management": false,
00:17:23.268 "zone_append": false,
00:17:23.268 "compare": false,
00:17:23.268 "compare_and_write": false,
00:17:23.268 "abort": false,
00:17:23.268 "seek_hole": false,
00:17:23.268 "seek_data": false,
00:17:23.268 "copy": false,
00:17:23.268 "nvme_iov_md": false
00:17:23.268 },
00:17:23.268 "memory_domains": [
00:17:23.268 {
00:17:23.268 "dma_device_id": "system",
00:17:23.268 "dma_device_type": 1
00:17:23.268 },
00:17:23.268 {
00:17:23.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:23.268 "dma_device_type": 2
00:17:23.268 },
00:17:23.268 {
00:17:23.268 "dma_device_id": "system",
00:17:23.268 "dma_device_type": 1
00:17:23.268 },
00:17:23.268 {
00:17:23.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:23.268 "dma_device_type": 2
00:17:23.268 }
00:17:23.268 ],
00:17:23.269 "driver_specific": {
00:17:23.269 "raid": {
00:17:23.269 "uuid": "a487091d-26b8-4235-9458-5970afc3709d",
00:17:23.269 "strip_size_kb": 0,
00:17:23.269 "state": "online",
00:17:23.269 "raid_level": "raid1",
00:17:23.269 "superblock": true,
00:17:23.269 "num_base_bdevs": 2,
00:17:23.269 "num_base_bdevs_discovered": 2,
00:17:23.269 "num_base_bdevs_operational": 2,
00:17:23.269 "base_bdevs_list": [
00:17:23.269 {
00:17:23.269 "name": "pt1",
00:17:23.269 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:23.269 "is_configured": true,
00:17:23.269 "data_offset": 256,
00:17:23.269 "data_size": 7936
00:17:23.269 },
00:17:23.269 {
00:17:23.269 "name": "pt2",
00:17:23.269 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:23.269 "is_configured": true,
00:17:23.269 "data_offset": 256,
00:17:23.269 "data_size": 7936
00:17:23.269 }
00:17:23.269 ]
00:17:23.269 }
00:17:23.269 }
00:17:23.269 }'
00:17:23.269 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:23.269 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:23.269 pt2'
00:17:23.269 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:17:23.528 [2024-11-18 10:45:49.270018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a487091d-26b8-4235-9458-5970afc3709d
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z a487091d-26b8-4235-9458-5970afc3709d ']'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.528 [2024-11-18 10:45:49.317710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:23.528 [2024-11-18 10:45:49.317732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:23.528 [2024-11-18 10:45:49.317799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:23.528 [2024-11-18 10:45:49.317844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:23.528 [2024-11-18 10:45:49.317855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.528 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.787 [2024-11-18 10:45:49.461499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:23.787 [2024-11-18 10:45:49.463173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:23.787 [2024-11-18 10:45:49.463253] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:17:23.787 [2024-11-18 10:45:49.463293] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:23.787 [2024-11-18 10:45:49.463305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:23.787 [2024-11-18 10:45:49.463314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:17:23.787 request:
00:17:23.787 {
00:17:23.787 "name": "raid_bdev1",
00:17:23.787 "raid_level": "raid1",
00:17:23.787 "base_bdevs": [
00:17:23.787 "malloc1",
00:17:23.787 "malloc2"
00:17:23.787 ],
00:17:23.787 "superblock": false,
00:17:23.787 "method": "bdev_raid_create",
00:17:23.787 "req_id": 1
00:17:23.787 }
00:17:23.787 Got JSON-RPC error response
00:17:23.787 response:
00:17:23.787 {
00:17:23.787 "code": -17,
00:17:23.787 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:23.787 }
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.787 [2024-11-18 10:45:49.529348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:23.787 [2024-11-18 10:45:49.529433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:23.787 [2024-11-18 10:45:49.529462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:23.787 [2024-11-18 10:45:49.529490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:23.787 [2024-11-18 10:45:49.531320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:23.787 [2024-11-18 10:45:49.531400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:23.787 [2024-11-18 10:45:49.531458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:23.787 [2024-11-18 10:45:49.531519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:23.787 pt1
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.787 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:23.787 "name": "raid_bdev1",
00:17:23.787 "uuid": "a487091d-26b8-4235-9458-5970afc3709d",
00:17:23.787 "strip_size_kb": 0,
00:17:23.787 "state": "configuring",
00:17:23.787 "raid_level": "raid1",
00:17:23.787 "superblock": true,
00:17:23.787 "num_base_bdevs": 2,
00:17:23.787 "num_base_bdevs_discovered": 1,
00:17:23.787 "num_base_bdevs_operational": 2,
00:17:23.787 "base_bdevs_list": [
00:17:23.787 {
00:17:23.788 "name": "pt1",
00:17:23.788 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:23.788 "is_configured": true,
00:17:23.788 "data_offset": 256,
00:17:23.788 "data_size": 7936
00:17:23.788 },
00:17:23.788 {
00:17:23.788 "name": null,
00:17:23.788 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:23.788 "is_configured": false,
00:17:23.788 "data_offset": 256,
00:17:23.788 "data_size": 7936
00:17:23.788 }
00:17:23.788 ]
00:17:23.788 }'
00:17:23.788 10:45:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:23.788 10:45:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:24.355 [2024-11-18 10:45:50.016489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:24.355 [2024-11-18 10:45:50.016545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:24.355 [2024-11-18 10:45:50.016573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:24.355 [2024-11-18 10:45:50.016582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:24.355 [2024-11-18 10:45:50.016727] vbdev_passthru.c: 709:vbdev_passthru_register:
*NOTICE*: pt_bdev registered 00:17:24.355 [2024-11-18 10:45:50.016747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:24.355 [2024-11-18 10:45:50.016781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:24.355 [2024-11-18 10:45:50.016814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:24.355 [2024-11-18 10:45:50.016904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:24.355 [2024-11-18 10:45:50.016938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.355 [2024-11-18 10:45:50.016997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:24.355 [2024-11-18 10:45:50.017100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:24.355 [2024-11-18 10:45:50.017107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:24.355 [2024-11-18 10:45:50.017209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.355 pt2 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.355 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.355 "name": "raid_bdev1", 00:17:24.355 "uuid": "a487091d-26b8-4235-9458-5970afc3709d", 00:17:24.355 "strip_size_kb": 0, 00:17:24.355 "state": "online", 00:17:24.355 "raid_level": "raid1", 00:17:24.356 "superblock": true, 00:17:24.356 "num_base_bdevs": 2, 00:17:24.356 "num_base_bdevs_discovered": 2, 00:17:24.356 "num_base_bdevs_operational": 2, 00:17:24.356 "base_bdevs_list": [ 00:17:24.356 { 00:17:24.356 "name": "pt1", 00:17:24.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.356 
"is_configured": true, 00:17:24.356 "data_offset": 256, 00:17:24.356 "data_size": 7936 00:17:24.356 }, 00:17:24.356 { 00:17:24.356 "name": "pt2", 00:17:24.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.356 "is_configured": true, 00:17:24.356 "data_offset": 256, 00:17:24.356 "data_size": 7936 00:17:24.356 } 00:17:24.356 ] 00:17:24.356 }' 00:17:24.356 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.356 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.615 [2024-11-18 10:45:50.447963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:24.615 "name": "raid_bdev1", 00:17:24.615 "aliases": [ 00:17:24.615 "a487091d-26b8-4235-9458-5970afc3709d" 00:17:24.615 ], 00:17:24.615 "product_name": "Raid Volume", 00:17:24.615 "block_size": 4096, 00:17:24.615 "num_blocks": 7936, 00:17:24.615 "uuid": "a487091d-26b8-4235-9458-5970afc3709d", 00:17:24.615 "md_size": 32, 00:17:24.615 "md_interleave": false, 00:17:24.615 "dif_type": 0, 00:17:24.615 "assigned_rate_limits": { 00:17:24.615 "rw_ios_per_sec": 0, 00:17:24.615 "rw_mbytes_per_sec": 0, 00:17:24.615 "r_mbytes_per_sec": 0, 00:17:24.615 "w_mbytes_per_sec": 0 00:17:24.615 }, 00:17:24.615 "claimed": false, 00:17:24.615 "zoned": false, 00:17:24.615 "supported_io_types": { 00:17:24.615 "read": true, 00:17:24.615 "write": true, 00:17:24.615 "unmap": false, 00:17:24.615 "flush": false, 00:17:24.615 "reset": true, 00:17:24.615 "nvme_admin": false, 00:17:24.615 "nvme_io": false, 00:17:24.615 "nvme_io_md": false, 00:17:24.615 "write_zeroes": true, 00:17:24.615 "zcopy": false, 00:17:24.615 "get_zone_info": false, 00:17:24.615 "zone_management": false, 00:17:24.615 "zone_append": false, 00:17:24.615 "compare": false, 00:17:24.615 "compare_and_write": false, 00:17:24.615 "abort": false, 00:17:24.615 "seek_hole": false, 00:17:24.615 "seek_data": false, 00:17:24.615 "copy": false, 00:17:24.615 "nvme_iov_md": false 00:17:24.615 }, 00:17:24.615 "memory_domains": [ 00:17:24.615 { 00:17:24.615 "dma_device_id": "system", 00:17:24.615 "dma_device_type": 1 00:17:24.615 }, 00:17:24.615 { 00:17:24.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.615 "dma_device_type": 2 00:17:24.615 }, 00:17:24.615 { 00:17:24.615 "dma_device_id": "system", 00:17:24.615 "dma_device_type": 1 00:17:24.615 }, 00:17:24.615 { 00:17:24.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.615 "dma_device_type": 2 00:17:24.615 } 00:17:24.615 ], 00:17:24.615 "driver_specific": { 
00:17:24.615 "raid": { 00:17:24.615 "uuid": "a487091d-26b8-4235-9458-5970afc3709d", 00:17:24.615 "strip_size_kb": 0, 00:17:24.615 "state": "online", 00:17:24.615 "raid_level": "raid1", 00:17:24.615 "superblock": true, 00:17:24.615 "num_base_bdevs": 2, 00:17:24.615 "num_base_bdevs_discovered": 2, 00:17:24.615 "num_base_bdevs_operational": 2, 00:17:24.615 "base_bdevs_list": [ 00:17:24.615 { 00:17:24.615 "name": "pt1", 00:17:24.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.615 "is_configured": true, 00:17:24.615 "data_offset": 256, 00:17:24.615 "data_size": 7936 00:17:24.615 }, 00:17:24.615 { 00:17:24.615 "name": "pt2", 00:17:24.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.615 "is_configured": true, 00:17:24.615 "data_offset": 256, 00:17:24.615 "data_size": 7936 00:17:24.615 } 00:17:24.615 ] 00:17:24.615 } 00:17:24.615 } 00:17:24.615 }' 00:17:24.615 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:24.876 pt2' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.876 10:45:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:24.876 [2024-11-18 10:45:50.675622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' a487091d-26b8-4235-9458-5970afc3709d '!=' a487091d-26b8-4235-9458-5970afc3709d ']' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.876 [2024-11-18 10:45:50.727340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.876 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.136 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.136 "name": "raid_bdev1", 00:17:25.136 "uuid": "a487091d-26b8-4235-9458-5970afc3709d", 00:17:25.136 "strip_size_kb": 0, 00:17:25.136 "state": "online", 00:17:25.136 "raid_level": "raid1", 00:17:25.136 "superblock": true, 00:17:25.136 "num_base_bdevs": 2, 00:17:25.136 "num_base_bdevs_discovered": 1, 00:17:25.136 "num_base_bdevs_operational": 1, 00:17:25.136 "base_bdevs_list": [ 00:17:25.136 { 00:17:25.136 "name": null, 00:17:25.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.136 "is_configured": false, 00:17:25.136 "data_offset": 0, 00:17:25.136 "data_size": 7936 00:17:25.136 }, 00:17:25.136 { 00:17:25.136 
"name": "pt2", 00:17:25.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.136 "is_configured": true, 00:17:25.136 "data_offset": 256, 00:17:25.136 "data_size": 7936 00:17:25.136 } 00:17:25.136 ] 00:17:25.136 }' 00:17:25.136 10:45:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.136 10:45:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 [2024-11-18 10:45:51.210461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.396 [2024-11-18 10:45:51.210522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.396 [2024-11-18 10:45:51.210576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.396 [2024-11-18 10:45:51.210610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.396 [2024-11-18 10:45:51.210620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.396 10:45:51 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.396 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:25.656 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:25.656 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:25.656 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:25.656 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:25.656 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.657 10:45:51 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 [2024-11-18 10:45:51.286340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.657 [2024-11-18 10:45:51.286428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.657 [2024-11-18 10:45:51.286471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:25.657 [2024-11-18 10:45:51.286500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.657 [2024-11-18 10:45:51.288405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.657 [2024-11-18 10:45:51.288478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.657 [2024-11-18 10:45:51.288548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:25.657 [2024-11-18 10:45:51.288626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.657 [2024-11-18 10:45:51.288727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:25.657 [2024-11-18 10:45:51.288766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.657 [2024-11-18 10:45:51.288848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:25.657 [2024-11-18 10:45:51.288980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:25.657 [2024-11-18 10:45:51.289016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:25.657 [2024-11-18 10:45:51.289141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.657 pt2 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.657 10:45:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.657 "name": "raid_bdev1", 00:17:25.657 "uuid": 
"a487091d-26b8-4235-9458-5970afc3709d", 00:17:25.657 "strip_size_kb": 0, 00:17:25.657 "state": "online", 00:17:25.657 "raid_level": "raid1", 00:17:25.657 "superblock": true, 00:17:25.657 "num_base_bdevs": 2, 00:17:25.657 "num_base_bdevs_discovered": 1, 00:17:25.657 "num_base_bdevs_operational": 1, 00:17:25.657 "base_bdevs_list": [ 00:17:25.657 { 00:17:25.657 "name": null, 00:17:25.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.657 "is_configured": false, 00:17:25.657 "data_offset": 256, 00:17:25.657 "data_size": 7936 00:17:25.657 }, 00:17:25.657 { 00:17:25.657 "name": "pt2", 00:17:25.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.657 "is_configured": true, 00:17:25.657 "data_offset": 256, 00:17:25.657 "data_size": 7936 00:17:25.657 } 00:17:25.657 ] 00:17:25.657 }' 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.657 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.917 [2024-11-18 10:45:51.737527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.917 [2024-11-18 10:45:51.737549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.917 [2024-11-18 10:45:51.737592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.917 [2024-11-18 10:45:51.737625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.917 [2024-11-18 10:45:51.737633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.917 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.178 [2024-11-18 10:45:51.801456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:26.178 [2024-11-18 10:45:51.801512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.178 [2024-11-18 10:45:51.801527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:26.178 [2024-11-18 10:45:51.801534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.178 [2024-11-18 
10:45:51.803421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.178 [2024-11-18 10:45:51.803457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:26.178 [2024-11-18 10:45:51.803499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:26.178 [2024-11-18 10:45:51.803540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:26.178 [2024-11-18 10:45:51.803644] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:26.178 [2024-11-18 10:45:51.803653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.178 [2024-11-18 10:45:51.803667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:26.178 [2024-11-18 10:45:51.803736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:26.178 [2024-11-18 10:45:51.803793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:26.178 [2024-11-18 10:45:51.803800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:26.178 [2024-11-18 10:45:51.803861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:26.178 [2024-11-18 10:45:51.803958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:26.178 [2024-11-18 10:45:51.803977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:26.178 [2024-11-18 10:45:51.804063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.178 pt1 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.178 10:45:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.178 "name": "raid_bdev1", 00:17:26.178 "uuid": "a487091d-26b8-4235-9458-5970afc3709d", 00:17:26.178 "strip_size_kb": 0, 00:17:26.178 "state": "online", 00:17:26.178 "raid_level": "raid1", 00:17:26.178 "superblock": true, 00:17:26.178 "num_base_bdevs": 2, 00:17:26.178 "num_base_bdevs_discovered": 1, 00:17:26.178 "num_base_bdevs_operational": 1, 00:17:26.178 "base_bdevs_list": [ 00:17:26.178 { 00:17:26.178 "name": null, 00:17:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.178 "is_configured": false, 00:17:26.178 "data_offset": 256, 00:17:26.178 "data_size": 7936 00:17:26.178 }, 00:17:26.178 { 00:17:26.178 "name": "pt2", 00:17:26.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.178 "is_configured": true, 00:17:26.178 "data_offset": 256, 00:17:26.178 "data_size": 7936 00:17:26.178 } 00:17:26.178 ] 00:17:26.178 }' 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.178 10:45:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.439 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.439 [2024-11-18 10:45:52.312756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' a487091d-26b8-4235-9458-5970afc3709d '!=' a487091d-26b8-4235-9458-5970afc3709d ']' 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87237 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87237 ']' 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87237 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87237 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87237' 00:17:26.699 killing process with pid 87237 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 87237 00:17:26.699 [2024-11-18 10:45:52.384571] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.699 [2024-11-18 10:45:52.384633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.699 [2024-11-18 10:45:52.384666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.699 [2024-11-18 10:45:52.384680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:26.699 10:45:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87237 00:17:26.959 [2024-11-18 10:45:52.592544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.901 ************************************ 00:17:27.901 END TEST raid_superblock_test_md_separate 00:17:27.901 ************************************ 00:17:27.901 10:45:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:27.901 00:17:27.901 real 0m6.107s 00:17:27.901 user 0m9.291s 00:17:27.901 sys 0m1.143s 00:17:27.901 10:45:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.901 10:45:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.901 10:45:53 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:27.901 10:45:53 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:27.901 10:45:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:27.901 10:45:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.901 10:45:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.901 ************************************ 00:17:27.901 START TEST raid_rebuild_test_sb_md_separate 00:17:27.901 
************************************ 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87565 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87565 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87565 ']' 00:17:27.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.901 10:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.162 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:28.162 Zero copy mechanism will not be used. 00:17:28.162 [2024-11-18 10:45:53.814636] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:28.162 [2024-11-18 10:45:53.814755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87565 ] 00:17:28.162 [2024-11-18 10:45:53.994742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.422 [2024-11-18 10:45:54.103570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.422 [2024-11-18 10:45:54.294602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.422 [2024-11-18 10:45:54.294640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.992 BaseBdev1_malloc 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.992 [2024-11-18 10:45:54.656965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:28.992 [2024-11-18 10:45:54.657029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.992 [2024-11-18 10:45:54.657054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:28.992 [2024-11-18 10:45:54.657064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.992 [2024-11-18 10:45:54.658854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.992 [2024-11-18 10:45:54.658975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.992 BaseBdev1 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.992 10:45:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.992 BaseBdev2_malloc 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.992 [2024-11-18 10:45:54.712716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:28.992 [2024-11-18 10:45:54.712846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.992 [2024-11-18 10:45:54.712870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:28.992 [2024-11-18 10:45:54.712880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.992 [2024-11-18 10:45:54.714670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.992 [2024-11-18 10:45:54.714710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:28.992 BaseBdev2 00:17:28.992 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 spare_malloc 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 spare_delay 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 [2024-11-18 10:45:54.809726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.993 [2024-11-18 10:45:54.809853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.993 [2024-11-18 10:45:54.809877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:28.993 [2024-11-18 10:45:54.809888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.993 [2024-11-18 10:45:54.811720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.993 [2024-11-18 10:45:54.811761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.993 spare 00:17:28.993 10:45:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 [2024-11-18 10:45:54.821746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.993 [2024-11-18 10:45:54.823526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.993 [2024-11-18 10:45:54.823751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:28.993 [2024-11-18 10:45:54.823797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:28.993 [2024-11-18 10:45:54.823879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:28.993 [2024-11-18 10:45:54.824031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:28.993 [2024-11-18 10:45:54.824067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:28.993 [2024-11-18 10:45:54.824208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.993 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.252 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.252 "name": "raid_bdev1", 00:17:29.252 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:29.252 "strip_size_kb": 0, 00:17:29.252 "state": "online", 00:17:29.252 "raid_level": "raid1", 00:17:29.252 "superblock": true, 00:17:29.252 "num_base_bdevs": 2, 00:17:29.252 "num_base_bdevs_discovered": 2, 00:17:29.252 "num_base_bdevs_operational": 2, 00:17:29.252 "base_bdevs_list": [ 
00:17:29.252 { 00:17:29.252 "name": "BaseBdev1", 00:17:29.252 "uuid": "d55efeb3-2de2-51cc-a253-ab59d8defc7a", 00:17:29.252 "is_configured": true, 00:17:29.252 "data_offset": 256, 00:17:29.252 "data_size": 7936 00:17:29.252 }, 00:17:29.252 { 00:17:29.252 "name": "BaseBdev2", 00:17:29.252 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:29.252 "is_configured": true, 00:17:29.252 "data_offset": 256, 00:17:29.252 "data_size": 7936 00:17:29.252 } 00:17:29.252 ] 00:17:29.252 }' 00:17:29.252 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.252 10:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.512 [2024-11-18 10:45:55.273199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:29.512 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.513 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:29.773 [2024-11-18 10:45:55.520534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 
00:17:29.773 /dev/nbd0 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.773 1+0 records in 00:17:29.773 1+0 records out 00:17:29.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505015 s, 8.1 MB/s 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.773 10:45:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:29.773 10:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:30.711 7936+0 records in 00:17:30.711 7936+0 records out 00:17:30.711 32505856 bytes (33 MB, 31 MiB) copied, 0.641574 s, 50.7 MB/s 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.711 10:45:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.711 [2024-11-18 10:45:56.467288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.711 [2024-11-18 10:45:56.483365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.711 10:45:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.711 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.711 "name": "raid_bdev1", 00:17:30.711 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:30.711 "strip_size_kb": 0, 00:17:30.711 "state": "online", 00:17:30.711 "raid_level": "raid1", 00:17:30.711 "superblock": true, 00:17:30.711 "num_base_bdevs": 2, 00:17:30.711 "num_base_bdevs_discovered": 1, 00:17:30.711 "num_base_bdevs_operational": 1, 00:17:30.711 "base_bdevs_list": [ 00:17:30.711 { 00:17:30.711 "name": null, 00:17:30.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.712 "is_configured": false, 00:17:30.712 "data_offset": 0, 00:17:30.712 "data_size": 7936 
00:17:30.712 }, 00:17:30.712 { 00:17:30.712 "name": "BaseBdev2", 00:17:30.712 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:30.712 "is_configured": true, 00:17:30.712 "data_offset": 256, 00:17:30.712 "data_size": 7936 00:17:30.712 } 00:17:30.712 ] 00:17:30.712 }' 00:17:30.712 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.712 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.283 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:31.283 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.283 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.283 [2024-11-18 10:45:56.938570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:31.283 [2024-11-18 10:45:56.952988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:31.283 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.283 10:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:31.283 [2024-11-18 10:45:56.954728] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.224 10:45:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.224 10:45:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.224 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.224 "name": "raid_bdev1", 00:17:32.224 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:32.224 "strip_size_kb": 0, 00:17:32.224 "state": "online", 00:17:32.224 "raid_level": "raid1", 00:17:32.224 "superblock": true, 00:17:32.224 "num_base_bdevs": 2, 00:17:32.224 "num_base_bdevs_discovered": 2, 00:17:32.224 "num_base_bdevs_operational": 2, 00:17:32.224 "process": { 00:17:32.224 "type": "rebuild", 00:17:32.224 "target": "spare", 00:17:32.224 "progress": { 00:17:32.224 "blocks": 2560, 00:17:32.224 "percent": 32 00:17:32.224 } 00:17:32.224 }, 00:17:32.224 "base_bdevs_list": [ 00:17:32.224 { 00:17:32.224 "name": "spare", 00:17:32.224 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:32.224 "is_configured": true, 00:17:32.224 "data_offset": 256, 00:17:32.224 "data_size": 7936 00:17:32.224 }, 00:17:32.224 { 00:17:32.224 "name": "BaseBdev2", 00:17:32.224 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:32.224 "is_configured": true, 00:17:32.224 "data_offset": 256, 00:17:32.224 "data_size": 7936 00:17:32.224 } 00:17:32.224 ] 00:17:32.224 }' 00:17:32.224 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:17:32.224 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.224 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.484 [2024-11-18 10:45:58.114591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.484 [2024-11-18 10:45:58.159600] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:32.484 [2024-11-18 10:45:58.159657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.484 [2024-11-18 10:45:58.159670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.484 [2024-11-18 10:45:58.159679] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid1 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.484 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.485 "name": "raid_bdev1", 00:17:32.485 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:32.485 "strip_size_kb": 0, 00:17:32.485 "state": "online", 00:17:32.485 "raid_level": "raid1", 00:17:32.485 "superblock": true, 00:17:32.485 "num_base_bdevs": 2, 00:17:32.485 "num_base_bdevs_discovered": 1, 00:17:32.485 "num_base_bdevs_operational": 1, 00:17:32.485 "base_bdevs_list": [ 00:17:32.485 { 00:17:32.485 "name": null, 00:17:32.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.485 "is_configured": false, 00:17:32.485 
"data_offset": 0, 00:17:32.485 "data_size": 7936 00:17:32.485 }, 00:17:32.485 { 00:17:32.485 "name": "BaseBdev2", 00:17:32.485 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:32.485 "is_configured": true, 00:17:32.485 "data_offset": 256, 00:17:32.485 "data_size": 7936 00:17:32.485 } 00:17:32.485 ] 00:17:32.485 }' 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.485 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.745 "name": "raid_bdev1", 00:17:32.745 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:32.745 
"strip_size_kb": 0, 00:17:32.745 "state": "online", 00:17:32.745 "raid_level": "raid1", 00:17:32.745 "superblock": true, 00:17:32.745 "num_base_bdevs": 2, 00:17:32.745 "num_base_bdevs_discovered": 1, 00:17:32.745 "num_base_bdevs_operational": 1, 00:17:32.745 "base_bdevs_list": [ 00:17:32.745 { 00:17:32.745 "name": null, 00:17:32.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.745 "is_configured": false, 00:17:32.745 "data_offset": 0, 00:17:32.745 "data_size": 7936 00:17:32.745 }, 00:17:32.745 { 00:17:32.745 "name": "BaseBdev2", 00:17:32.745 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:32.745 "is_configured": true, 00:17:32.745 "data_offset": 256, 00:17:32.745 "data_size": 7936 00:17:32.745 } 00:17:32.745 ] 00:17:32.745 }' 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.745 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.006 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.006 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.006 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:33.006 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.006 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.006 [2024-11-18 10:45:58.686116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.006 [2024-11-18 10:45:58.699593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:33.006 10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.006 
10:45:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:33.006 [2024-11-18 10:45:58.701351] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.946 "name": "raid_bdev1", 00:17:33.946 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:33.946 "strip_size_kb": 0, 00:17:33.946 "state": "online", 00:17:33.946 "raid_level": "raid1", 00:17:33.946 "superblock": true, 00:17:33.946 "num_base_bdevs": 2, 00:17:33.946 "num_base_bdevs_discovered": 2, 00:17:33.946 "num_base_bdevs_operational": 2, 00:17:33.946 "process": { 00:17:33.946 "type": "rebuild", 
00:17:33.946 "target": "spare", 00:17:33.946 "progress": { 00:17:33.946 "blocks": 2560, 00:17:33.946 "percent": 32 00:17:33.946 } 00:17:33.946 }, 00:17:33.946 "base_bdevs_list": [ 00:17:33.946 { 00:17:33.946 "name": "spare", 00:17:33.946 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:33.946 "is_configured": true, 00:17:33.946 "data_offset": 256, 00:17:33.946 "data_size": 7936 00:17:33.946 }, 00:17:33.946 { 00:17:33.946 "name": "BaseBdev2", 00:17:33.946 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:33.946 "is_configured": true, 00:17:33.946 "data_offset": 256, 00:17:33.946 "data_size": 7936 00:17:33.946 } 00:17:33.946 ] 00:17:33.946 }' 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.946 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:34.206 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=701 00:17:34.206 10:45:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.206 "name": "raid_bdev1", 00:17:34.206 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:34.206 "strip_size_kb": 0, 00:17:34.206 "state": "online", 00:17:34.206 "raid_level": "raid1", 00:17:34.206 "superblock": true, 00:17:34.206 "num_base_bdevs": 2, 00:17:34.206 "num_base_bdevs_discovered": 2, 00:17:34.206 "num_base_bdevs_operational": 2, 00:17:34.206 "process": { 00:17:34.206 "type": "rebuild", 00:17:34.206 "target": "spare", 00:17:34.206 "progress": { 00:17:34.206 "blocks": 2816, 00:17:34.206 "percent": 35 00:17:34.206 } 00:17:34.206 
}, 00:17:34.206 "base_bdevs_list": [ 00:17:34.206 { 00:17:34.206 "name": "spare", 00:17:34.206 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:34.206 "is_configured": true, 00:17:34.206 "data_offset": 256, 00:17:34.206 "data_size": 7936 00:17:34.206 }, 00:17:34.206 { 00:17:34.206 "name": "BaseBdev2", 00:17:34.206 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:34.206 "is_configured": true, 00:17:34.206 "data_offset": 256, 00:17:34.206 "data_size": 7936 00:17:34.206 } 00:17:34.206 ] 00:17:34.206 }' 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.206 10:45:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.147 10:46:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.147 10:46:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.147 10:46:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.147 10:46:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.147 10:46:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.147 10:46:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.147 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:35.147 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.147 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.147 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.147 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.408 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.408 "name": "raid_bdev1", 00:17:35.408 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:35.408 "strip_size_kb": 0, 00:17:35.408 "state": "online", 00:17:35.408 "raid_level": "raid1", 00:17:35.408 "superblock": true, 00:17:35.408 "num_base_bdevs": 2, 00:17:35.408 "num_base_bdevs_discovered": 2, 00:17:35.408 "num_base_bdevs_operational": 2, 00:17:35.408 "process": { 00:17:35.408 "type": "rebuild", 00:17:35.408 "target": "spare", 00:17:35.408 "progress": { 00:17:35.408 "blocks": 5888, 00:17:35.408 "percent": 74 00:17:35.408 } 00:17:35.408 }, 00:17:35.408 "base_bdevs_list": [ 00:17:35.408 { 00:17:35.408 "name": "spare", 00:17:35.408 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:35.408 "is_configured": true, 00:17:35.408 "data_offset": 256, 00:17:35.408 "data_size": 7936 00:17:35.408 }, 00:17:35.408 { 00:17:35.408 "name": "BaseBdev2", 00:17:35.408 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:35.408 "is_configured": true, 00:17:35.408 "data_offset": 256, 00:17:35.408 "data_size": 7936 00:17:35.408 } 00:17:35.408 ] 00:17:35.408 }' 00:17:35.408 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.408 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.408 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.408 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.408 10:46:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.979 [2024-11-18 10:46:01.813065] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:35.979 [2024-11-18 10:46:01.813196] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:35.979 [2024-11-18 10:46:01.813329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.549 "name": "raid_bdev1", 00:17:36.549 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:36.549 "strip_size_kb": 0, 00:17:36.549 "state": "online", 00:17:36.549 "raid_level": "raid1", 00:17:36.549 "superblock": true, 00:17:36.549 "num_base_bdevs": 2, 00:17:36.549 "num_base_bdevs_discovered": 2, 00:17:36.549 "num_base_bdevs_operational": 2, 00:17:36.549 "base_bdevs_list": [ 00:17:36.549 { 00:17:36.549 "name": "spare", 00:17:36.549 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:36.549 "is_configured": true, 00:17:36.549 "data_offset": 256, 00:17:36.549 "data_size": 7936 00:17:36.549 }, 00:17:36.549 { 00:17:36.549 "name": "BaseBdev2", 00:17:36.549 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:36.549 "is_configured": true, 00:17:36.549 "data_offset": 256, 00:17:36.549 "data_size": 7936 00:17:36.549 } 00:17:36.549 ] 00:17:36.549 }' 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.549 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.550 "name": "raid_bdev1", 00:17:36.550 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:36.550 "strip_size_kb": 0, 00:17:36.550 "state": "online", 00:17:36.550 "raid_level": "raid1", 00:17:36.550 "superblock": true, 00:17:36.550 "num_base_bdevs": 2, 00:17:36.550 "num_base_bdevs_discovered": 2, 00:17:36.550 "num_base_bdevs_operational": 2, 00:17:36.550 "base_bdevs_list": [ 00:17:36.550 { 00:17:36.550 "name": "spare", 00:17:36.550 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:36.550 "is_configured": true, 00:17:36.550 "data_offset": 256, 00:17:36.550 "data_size": 7936 00:17:36.550 }, 00:17:36.550 { 00:17:36.550 "name": "BaseBdev2", 00:17:36.550 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:36.550 "is_configured": true, 00:17:36.550 "data_offset": 256, 00:17:36.550 "data_size": 7936 00:17:36.550 } 00:17:36.550 ] 00:17:36.550 }' 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.550 10:46:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.550 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.809 "name": "raid_bdev1", 00:17:36.809 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:36.809 "strip_size_kb": 0, 00:17:36.809 "state": "online", 00:17:36.809 "raid_level": "raid1", 00:17:36.809 "superblock": true, 00:17:36.809 "num_base_bdevs": 2, 00:17:36.809 "num_base_bdevs_discovered": 2, 00:17:36.809 "num_base_bdevs_operational": 2, 00:17:36.809 "base_bdevs_list": [ 00:17:36.809 { 00:17:36.809 "name": "spare", 00:17:36.809 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:36.809 "is_configured": true, 00:17:36.809 "data_offset": 256, 00:17:36.809 "data_size": 7936 00:17:36.809 }, 00:17:36.809 { 00:17:36.809 "name": "BaseBdev2", 00:17:36.809 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:36.809 "is_configured": true, 00:17:36.809 "data_offset": 256, 00:17:36.809 "data_size": 7936 00:17:36.809 } 00:17:36.809 ] 00:17:36.809 }' 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.809 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.068 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:37.068 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.068 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.069 [2024-11-18 10:46:02.886843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.069 [2024-11-18 10:46:02.886916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:17:37.069 [2024-11-18 10:46:02.887011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.069 [2024-11-18 10:46:02.887085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.069 [2024-11-18 10:46:02.887139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:37.069 10:46:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:37.329 /dev/nbd0 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.329 10:46:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.329 1+0 records in 00:17:37.329 1+0 records out 00:17:37.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436348 s, 9.4 MB/s 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:37.329 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:37.589 /dev/nbd1 00:17:37.589 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:37.589 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:37.589 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.590 10:46:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.590 1+0 records in 00:17:37.590 1+0 records out 00:17:37.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421692 s, 9.7 MB/s 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:37.590 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:37.849 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 
-- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:37.849 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.849 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:37.849 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.849 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:37.849 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.849 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.109 10:46:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.370 [2024-11-18 10:46:04.064402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on spare_delay 00:17:38.370 [2024-11-18 10:46:04.064457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.370 [2024-11-18 10:46:04.064479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:38.370 [2024-11-18 10:46:04.064489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.370 [2024-11-18 10:46:04.066366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.370 [2024-11-18 10:46:04.066401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:38.370 [2024-11-18 10:46:04.066453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:38.370 [2024-11-18 10:46:04.066513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.370 [2024-11-18 10:46:04.066636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.370 spare 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.370 [2024-11-18 10:46:04.166511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:38.370 [2024-11-18 10:46:04.166539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:38.370 [2024-11-18 10:46:04.166629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:38.370 [2024-11-18 10:46:04.166746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 
00:17:38.370 [2024-11-18 10:46:04.166754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:38.370 [2024-11-18 10:46:04.166867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.370 "name": "raid_bdev1", 00:17:38.370 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:38.370 "strip_size_kb": 0, 00:17:38.370 "state": "online", 00:17:38.370 "raid_level": "raid1", 00:17:38.370 "superblock": true, 00:17:38.370 "num_base_bdevs": 2, 00:17:38.370 "num_base_bdevs_discovered": 2, 00:17:38.370 "num_base_bdevs_operational": 2, 00:17:38.370 "base_bdevs_list": [ 00:17:38.370 { 00:17:38.370 "name": "spare", 00:17:38.370 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:38.370 "is_configured": true, 00:17:38.370 "data_offset": 256, 00:17:38.370 "data_size": 7936 00:17:38.370 }, 00:17:38.370 { 00:17:38.370 "name": "BaseBdev2", 00:17:38.370 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:38.370 "is_configured": true, 00:17:38.370 "data_offset": 256, 00:17:38.370 "data_size": 7936 00:17:38.370 } 00:17:38.370 ] 00:17:38.370 }' 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.370 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.940 "name": "raid_bdev1", 00:17:38.940 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:38.940 "strip_size_kb": 0, 00:17:38.940 "state": "online", 00:17:38.940 "raid_level": "raid1", 00:17:38.940 "superblock": true, 00:17:38.940 "num_base_bdevs": 2, 00:17:38.940 "num_base_bdevs_discovered": 2, 00:17:38.940 "num_base_bdevs_operational": 2, 00:17:38.940 "base_bdevs_list": [ 00:17:38.940 { 00:17:38.940 "name": "spare", 00:17:38.940 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:38.940 "is_configured": true, 00:17:38.940 "data_offset": 256, 00:17:38.940 "data_size": 7936 00:17:38.940 }, 00:17:38.940 { 00:17:38.940 "name": "BaseBdev2", 00:17:38.940 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:38.940 "is_configured": true, 00:17:38.940 "data_offset": 256, 00:17:38.940 "data_size": 7936 00:17:38.940 } 00:17:38.940 ] 00:17:38.940 }' 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.940 [2024-11-18 10:46:04.779237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.940 10:46:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.940 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.200 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.200 "name": "raid_bdev1", 00:17:39.200 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:39.200 "strip_size_kb": 0, 00:17:39.200 "state": "online", 00:17:39.200 "raid_level": "raid1", 00:17:39.200 "superblock": true, 00:17:39.200 "num_base_bdevs": 2, 00:17:39.200 "num_base_bdevs_discovered": 1, 00:17:39.200 "num_base_bdevs_operational": 1, 00:17:39.200 "base_bdevs_list": [ 00:17:39.200 { 00:17:39.200 "name": null, 00:17:39.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.200 "is_configured": false, 00:17:39.200 "data_offset": 0, 00:17:39.200 "data_size": 7936 
00:17:39.200 }, 00:17:39.200 { 00:17:39.200 "name": "BaseBdev2", 00:17:39.200 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:39.200 "is_configured": true, 00:17:39.200 "data_offset": 256, 00:17:39.200 "data_size": 7936 00:17:39.200 } 00:17:39.200 ] 00:17:39.200 }' 00:17:39.200 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.200 10:46:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.465 10:46:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:39.465 10:46:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.465 10:46:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.465 [2024-11-18 10:46:05.238418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.465 [2024-11-18 10:46:05.238617] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.465 [2024-11-18 10:46:05.238697] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:39.465 [2024-11-18 10:46:05.238756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.465 [2024-11-18 10:46:05.251484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:39.465 10:46:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.465 10:46:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:39.465 [2024-11-18 10:46:05.253310] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.421 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.681 "name": "raid_bdev1", 00:17:40.681 
"uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:40.681 "strip_size_kb": 0, 00:17:40.681 "state": "online", 00:17:40.681 "raid_level": "raid1", 00:17:40.681 "superblock": true, 00:17:40.681 "num_base_bdevs": 2, 00:17:40.681 "num_base_bdevs_discovered": 2, 00:17:40.681 "num_base_bdevs_operational": 2, 00:17:40.681 "process": { 00:17:40.681 "type": "rebuild", 00:17:40.681 "target": "spare", 00:17:40.681 "progress": { 00:17:40.681 "blocks": 2560, 00:17:40.681 "percent": 32 00:17:40.681 } 00:17:40.681 }, 00:17:40.681 "base_bdevs_list": [ 00:17:40.681 { 00:17:40.681 "name": "spare", 00:17:40.681 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:40.681 "is_configured": true, 00:17:40.681 "data_offset": 256, 00:17:40.681 "data_size": 7936 00:17:40.681 }, 00:17:40.681 { 00:17:40.681 "name": "BaseBdev2", 00:17:40.681 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:40.681 "is_configured": true, 00:17:40.681 "data_offset": 256, 00:17:40.681 "data_size": 7936 00:17:40.681 } 00:17:40.681 ] 00:17:40.681 }' 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.681 [2024-11-18 10:46:06.405130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.681 
[2024-11-18 10:46:06.458012] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:40.681 [2024-11-18 10:46:06.458065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.681 [2024-11-18 10:46:06.458079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.681 [2024-11-18 10:46:06.458097] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.681 10:46:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.681 "name": "raid_bdev1", 00:17:40.681 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:40.681 "strip_size_kb": 0, 00:17:40.681 "state": "online", 00:17:40.681 "raid_level": "raid1", 00:17:40.681 "superblock": true, 00:17:40.681 "num_base_bdevs": 2, 00:17:40.681 "num_base_bdevs_discovered": 1, 00:17:40.681 "num_base_bdevs_operational": 1, 00:17:40.681 "base_bdevs_list": [ 00:17:40.681 { 00:17:40.681 "name": null, 00:17:40.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.681 "is_configured": false, 00:17:40.681 "data_offset": 0, 00:17:40.681 "data_size": 7936 00:17:40.681 }, 00:17:40.681 { 00:17:40.681 "name": "BaseBdev2", 00:17:40.681 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:40.681 "is_configured": true, 00:17:40.681 "data_offset": 256, 00:17:40.681 "data_size": 7936 00:17:40.681 } 00:17:40.681 ] 00:17:40.681 }' 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.681 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.252 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:41.252 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.252 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:41.252 [2024-11-18 10:46:06.888624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:41.252 [2024-11-18 10:46:06.888724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.252 [2024-11-18 10:46:06.888765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:41.252 [2024-11-18 10:46:06.888797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.252 [2024-11-18 10:46:06.889032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.252 [2024-11-18 10:46:06.889084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:41.252 [2024-11-18 10:46:06.889155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:41.252 [2024-11-18 10:46:06.889226] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:41.252 [2024-11-18 10:46:06.889268] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:41.252 [2024-11-18 10:46:06.889330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.252 [2024-11-18 10:46:06.901652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:41.252 spare 00:17:41.252 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.252 [2024-11-18 10:46:06.903458] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.252 10:46:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.191 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.192 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.192 "name": 
"raid_bdev1", 00:17:42.192 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:42.192 "strip_size_kb": 0, 00:17:42.192 "state": "online", 00:17:42.192 "raid_level": "raid1", 00:17:42.192 "superblock": true, 00:17:42.192 "num_base_bdevs": 2, 00:17:42.192 "num_base_bdevs_discovered": 2, 00:17:42.192 "num_base_bdevs_operational": 2, 00:17:42.192 "process": { 00:17:42.192 "type": "rebuild", 00:17:42.192 "target": "spare", 00:17:42.192 "progress": { 00:17:42.192 "blocks": 2560, 00:17:42.192 "percent": 32 00:17:42.192 } 00:17:42.192 }, 00:17:42.192 "base_bdevs_list": [ 00:17:42.192 { 00:17:42.192 "name": "spare", 00:17:42.192 "uuid": "38a1de4f-ed54-5363-8e0d-81cc9efe6236", 00:17:42.192 "is_configured": true, 00:17:42.192 "data_offset": 256, 00:17:42.192 "data_size": 7936 00:17:42.192 }, 00:17:42.192 { 00:17:42.192 "name": "BaseBdev2", 00:17:42.192 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:42.192 "is_configured": true, 00:17:42.192 "data_offset": 256, 00:17:42.192 "data_size": 7936 00:17:42.192 } 00:17:42.192 ] 00:17:42.192 }' 00:17:42.192 10:46:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.192 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.192 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.192 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.192 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:42.192 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.192 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.192 [2024-11-18 10:46:08.067686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:42.452 [2024-11-18 10:46:08.108063] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:42.452 [2024-11-18 10:46:08.108116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.452 [2024-11-18 10:46:08.108133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.452 [2024-11-18 10:46:08.108140] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.452 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.452 "name": "raid_bdev1", 00:17:42.452 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:42.452 "strip_size_kb": 0, 00:17:42.452 "state": "online", 00:17:42.452 "raid_level": "raid1", 00:17:42.452 "superblock": true, 00:17:42.452 "num_base_bdevs": 2, 00:17:42.452 "num_base_bdevs_discovered": 1, 00:17:42.452 "num_base_bdevs_operational": 1, 00:17:42.452 "base_bdevs_list": [ 00:17:42.452 { 00:17:42.452 "name": null, 00:17:42.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.452 "is_configured": false, 00:17:42.452 "data_offset": 0, 00:17:42.452 "data_size": 7936 00:17:42.452 }, 00:17:42.452 { 00:17:42.452 "name": "BaseBdev2", 00:17:42.452 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:42.453 "is_configured": true, 00:17:42.453 "data_offset": 256, 00:17:42.453 "data_size": 7936 00:17:42.453 } 00:17:42.453 ] 00:17:42.453 }' 00:17:42.453 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.453 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.713 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.713 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.713 10:46:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.713 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.713 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.973 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.973 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.973 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.973 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.973 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.973 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.973 "name": "raid_bdev1", 00:17:42.973 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:42.973 "strip_size_kb": 0, 00:17:42.973 "state": "online", 00:17:42.973 "raid_level": "raid1", 00:17:42.973 "superblock": true, 00:17:42.973 "num_base_bdevs": 2, 00:17:42.973 "num_base_bdevs_discovered": 1, 00:17:42.973 "num_base_bdevs_operational": 1, 00:17:42.973 "base_bdevs_list": [ 00:17:42.973 { 00:17:42.973 "name": null, 00:17:42.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.973 "is_configured": false, 00:17:42.973 "data_offset": 0, 00:17:42.973 "data_size": 7936 00:17:42.973 }, 00:17:42.973 { 00:17:42.973 "name": "BaseBdev2", 00:17:42.973 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:42.973 "is_configured": true, 00:17:42.973 "data_offset": 256, 00:17:42.973 "data_size": 7936 00:17:42.973 } 00:17:42.973 ] 00:17:42.973 }' 00:17:42.973 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.974 [2024-11-18 10:46:08.741906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:42.974 [2024-11-18 10:46:08.741960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.974 [2024-11-18 10:46:08.741984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:42.974 [2024-11-18 10:46:08.741993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.974 [2024-11-18 10:46:08.742229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.974 [2024-11-18 10:46:08.742242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:42.974 [2024-11-18 10:46:08.742293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:42.974 [2024-11-18 10:46:08.742308] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:42.974 [2024-11-18 10:46:08.742317] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:42.974 [2024-11-18 10:46:08.742326] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:42.974 BaseBdev1 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.974 10:46:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.913 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.174 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.174 "name": "raid_bdev1", 00:17:44.174 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:44.174 "strip_size_kb": 0, 00:17:44.174 "state": "online", 00:17:44.174 "raid_level": "raid1", 00:17:44.174 "superblock": true, 00:17:44.174 "num_base_bdevs": 2, 00:17:44.174 "num_base_bdevs_discovered": 1, 00:17:44.174 "num_base_bdevs_operational": 1, 00:17:44.174 "base_bdevs_list": [ 00:17:44.174 { 00:17:44.174 "name": null, 00:17:44.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.174 "is_configured": false, 00:17:44.174 "data_offset": 0, 00:17:44.174 "data_size": 7936 00:17:44.174 }, 00:17:44.174 { 00:17:44.174 "name": "BaseBdev2", 00:17:44.174 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:44.174 "is_configured": true, 00:17:44.174 "data_offset": 256, 00:17:44.174 "data_size": 7936 00:17:44.174 } 00:17:44.174 ] 00:17:44.174 }' 00:17:44.174 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.174 10:46:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.435 "name": "raid_bdev1", 00:17:44.435 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:44.435 "strip_size_kb": 0, 00:17:44.435 "state": "online", 00:17:44.435 "raid_level": "raid1", 00:17:44.435 "superblock": true, 00:17:44.435 "num_base_bdevs": 2, 00:17:44.435 "num_base_bdevs_discovered": 1, 00:17:44.435 "num_base_bdevs_operational": 1, 00:17:44.435 "base_bdevs_list": [ 00:17:44.435 { 00:17:44.435 "name": null, 00:17:44.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.435 "is_configured": false, 00:17:44.435 "data_offset": 0, 00:17:44.435 "data_size": 7936 00:17:44.435 }, 00:17:44.435 { 00:17:44.435 "name": "BaseBdev2", 00:17:44.435 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:44.435 "is_configured": 
true, 00:17:44.435 "data_offset": 256, 00:17:44.435 "data_size": 7936 00:17:44.435 } 00:17:44.435 ] 00:17:44.435 }' 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.435 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.695 [2024-11-18 10:46:10.319374] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.695 [2024-11-18 10:46:10.319522] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.695 [2024-11-18 10:46:10.319539] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:44.696 request: 00:17:44.696 { 00:17:44.696 "base_bdev": "BaseBdev1", 00:17:44.696 "raid_bdev": "raid_bdev1", 00:17:44.696 "method": "bdev_raid_add_base_bdev", 00:17:44.696 "req_id": 1 00:17:44.696 } 00:17:44.696 Got JSON-RPC error response 00:17:44.696 response: 00:17:44.696 { 00:17:44.696 "code": -22, 00:17:44.696 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:44.696 } 00:17:44.696 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:44.696 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:44.696 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.696 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.696 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.696 10:46:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.637 "name": "raid_bdev1", 00:17:45.637 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:45.637 "strip_size_kb": 0, 00:17:45.637 "state": "online", 00:17:45.637 "raid_level": "raid1", 00:17:45.637 "superblock": true, 00:17:45.637 "num_base_bdevs": 2, 00:17:45.637 "num_base_bdevs_discovered": 1, 00:17:45.637 "num_base_bdevs_operational": 1, 00:17:45.637 "base_bdevs_list": [ 00:17:45.637 { 00:17:45.637 "name": null, 00:17:45.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.637 "is_configured": false, 00:17:45.637 
"data_offset": 0, 00:17:45.637 "data_size": 7936 00:17:45.637 }, 00:17:45.637 { 00:17:45.637 "name": "BaseBdev2", 00:17:45.637 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:45.637 "is_configured": true, 00:17:45.637 "data_offset": 256, 00:17:45.637 "data_size": 7936 00:17:45.637 } 00:17:45.637 ] 00:17:45.637 }' 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.637 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.207 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.207 "name": "raid_bdev1", 00:17:46.207 "uuid": "bd80bf6b-648a-4f69-827b-7a9c07845214", 00:17:46.207 
"strip_size_kb": 0, 00:17:46.207 "state": "online", 00:17:46.208 "raid_level": "raid1", 00:17:46.208 "superblock": true, 00:17:46.208 "num_base_bdevs": 2, 00:17:46.208 "num_base_bdevs_discovered": 1, 00:17:46.208 "num_base_bdevs_operational": 1, 00:17:46.208 "base_bdevs_list": [ 00:17:46.208 { 00:17:46.208 "name": null, 00:17:46.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.208 "is_configured": false, 00:17:46.208 "data_offset": 0, 00:17:46.208 "data_size": 7936 00:17:46.208 }, 00:17:46.208 { 00:17:46.208 "name": "BaseBdev2", 00:17:46.208 "uuid": "0387b2e8-26c3-5153-b2eb-158163eeb117", 00:17:46.208 "is_configured": true, 00:17:46.208 "data_offset": 256, 00:17:46.208 "data_size": 7936 00:17:46.208 } 00:17:46.208 ] 00:17:46.208 }' 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87565 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87565 ']' 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87565 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87565 00:17:46.208 10:46:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.208 killing process with pid 87565 00:17:46.208 Received shutdown signal, test time was about 60.000000 seconds 00:17:46.208 00:17:46.208 Latency(us) 00:17:46.208 [2024-11-18T10:46:12.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.208 [2024-11-18T10:46:12.093Z] =================================================================================================================== 00:17:46.208 [2024-11-18T10:46:12.093Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87565' 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87565 00:17:46.208 [2024-11-18 10:46:11.947495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.208 [2024-11-18 10:46:11.947614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.208 10:46:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87565 00:17:46.208 [2024-11-18 10:46:11.947661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.208 [2024-11-18 10:46:11.947673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:46.468 [2024-11-18 10:46:12.247294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.408 10:46:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:47.408 00:17:47.408 real 0m19.581s 00:17:47.408 user 0m25.432s 00:17:47.408 sys 0m2.715s 00:17:47.408 10:46:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.408 10:46:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.408 ************************************ 00:17:47.409 END TEST raid_rebuild_test_sb_md_separate 00:17:47.409 ************************************ 00:17:47.669 10:46:13 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:47.669 10:46:13 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:47.669 10:46:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:47.669 10:46:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.669 10:46:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.669 ************************************ 00:17:47.669 START TEST raid_state_function_test_sb_md_interleaved 00:17:47.669 ************************************ 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:47.669 10:46:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:47.669 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88252 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88252' 00:17:47.670 Process raid pid: 88252 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88252 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88252 ']' 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.670 10:46:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.670 [2024-11-18 10:46:13.470862] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:47.670 [2024-11-18 10:46:13.470983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.930 [2024-11-18 10:46:13.649745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.930 [2024-11-18 10:46:13.756733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.190 [2024-11-18 10:46:13.950510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.190 [2024-11-18 10:46:13.950546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.451 [2024-11-18 10:46:14.288285] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.451 [2024-11-18 10:46:14.288391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.451 [2024-11-18 10:46:14.288419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.451 [2024-11-18 10:46:14.288442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.451 10:46:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.451 10:46:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.451 "name": "Existed_Raid", 00:17:48.451 "uuid": "336a302c-dab4-4199-b361-4dffded12644", 00:17:48.451 "strip_size_kb": 0, 00:17:48.451 "state": "configuring", 00:17:48.451 "raid_level": "raid1", 00:17:48.451 "superblock": true, 00:17:48.451 "num_base_bdevs": 2, 00:17:48.451 "num_base_bdevs_discovered": 0, 00:17:48.451 "num_base_bdevs_operational": 2, 00:17:48.451 "base_bdevs_list": [ 00:17:48.451 { 00:17:48.451 "name": "BaseBdev1", 00:17:48.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.451 "is_configured": false, 00:17:48.451 "data_offset": 0, 00:17:48.451 "data_size": 0 00:17:48.451 }, 00:17:48.451 { 00:17:48.451 "name": "BaseBdev2", 00:17:48.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.451 "is_configured": false, 00:17:48.451 "data_offset": 0, 00:17:48.451 "data_size": 0 00:17:48.451 } 00:17:48.451 ] 00:17:48.451 }' 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.451 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 [2024-11-18 10:46:14.727449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.022 [2024-11-18 10:46:14.727526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 [2024-11-18 10:46:14.739448] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.022 [2024-11-18 10:46:14.739525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.022 [2024-11-18 10:46:14.739551] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.022 [2024-11-18 10:46:14.739574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 [2024-11-18 10:46:14.786129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.022 BaseBdev1 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:49.022 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.023 [ 00:17:49.023 { 00:17:49.023 "name": "BaseBdev1", 00:17:49.023 "aliases": [ 00:17:49.023 "aee7353b-b486-43c8-b4e2-3073e3347a0a" 00:17:49.023 ], 00:17:49.023 "product_name": "Malloc disk", 00:17:49.023 "block_size": 4128, 00:17:49.023 "num_blocks": 8192, 00:17:49.023 "uuid": "aee7353b-b486-43c8-b4e2-3073e3347a0a", 00:17:49.023 "md_size": 32, 00:17:49.023 
"md_interleave": true, 00:17:49.023 "dif_type": 0, 00:17:49.023 "assigned_rate_limits": { 00:17:49.023 "rw_ios_per_sec": 0, 00:17:49.023 "rw_mbytes_per_sec": 0, 00:17:49.023 "r_mbytes_per_sec": 0, 00:17:49.023 "w_mbytes_per_sec": 0 00:17:49.023 }, 00:17:49.023 "claimed": true, 00:17:49.023 "claim_type": "exclusive_write", 00:17:49.023 "zoned": false, 00:17:49.023 "supported_io_types": { 00:17:49.023 "read": true, 00:17:49.023 "write": true, 00:17:49.023 "unmap": true, 00:17:49.023 "flush": true, 00:17:49.023 "reset": true, 00:17:49.023 "nvme_admin": false, 00:17:49.023 "nvme_io": false, 00:17:49.023 "nvme_io_md": false, 00:17:49.023 "write_zeroes": true, 00:17:49.023 "zcopy": true, 00:17:49.023 "get_zone_info": false, 00:17:49.023 "zone_management": false, 00:17:49.023 "zone_append": false, 00:17:49.023 "compare": false, 00:17:49.023 "compare_and_write": false, 00:17:49.023 "abort": true, 00:17:49.023 "seek_hole": false, 00:17:49.023 "seek_data": false, 00:17:49.023 "copy": true, 00:17:49.023 "nvme_iov_md": false 00:17:49.023 }, 00:17:49.023 "memory_domains": [ 00:17:49.023 { 00:17:49.023 "dma_device_id": "system", 00:17:49.023 "dma_device_type": 1 00:17:49.023 }, 00:17:49.023 { 00:17:49.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.023 "dma_device_type": 2 00:17:49.023 } 00:17:49.023 ], 00:17:49.023 "driver_specific": {} 00:17:49.023 } 00:17:49.023 ] 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.023 10:46:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.023 "name": "Existed_Raid", 00:17:49.023 "uuid": "27edf588-62eb-409b-91d7-345115b4634e", 00:17:49.023 "strip_size_kb": 0, 00:17:49.023 "state": "configuring", 00:17:49.023 "raid_level": "raid1", 
00:17:49.023 "superblock": true, 00:17:49.023 "num_base_bdevs": 2, 00:17:49.023 "num_base_bdevs_discovered": 1, 00:17:49.023 "num_base_bdevs_operational": 2, 00:17:49.023 "base_bdevs_list": [ 00:17:49.023 { 00:17:49.023 "name": "BaseBdev1", 00:17:49.023 "uuid": "aee7353b-b486-43c8-b4e2-3073e3347a0a", 00:17:49.023 "is_configured": true, 00:17:49.023 "data_offset": 256, 00:17:49.023 "data_size": 7936 00:17:49.023 }, 00:17:49.023 { 00:17:49.023 "name": "BaseBdev2", 00:17:49.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.023 "is_configured": false, 00:17:49.023 "data_offset": 0, 00:17:49.023 "data_size": 0 00:17:49.023 } 00:17:49.023 ] 00:17:49.023 }' 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.023 10:46:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 [2024-11-18 10:46:15.193456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.594 [2024-11-18 10:46:15.193543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 [2024-11-18 10:46:15.205501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.594 [2024-11-18 10:46:15.207252] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.594 [2024-11-18 10:46:15.207342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.594 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.595 
10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.595 "name": "Existed_Raid", 00:17:49.595 "uuid": "53e2fe82-2c48-4054-b43a-a668dfec8626", 00:17:49.595 "strip_size_kb": 0, 00:17:49.595 "state": "configuring", 00:17:49.595 "raid_level": "raid1", 00:17:49.595 "superblock": true, 00:17:49.595 "num_base_bdevs": 2, 00:17:49.595 "num_base_bdevs_discovered": 1, 00:17:49.595 "num_base_bdevs_operational": 2, 00:17:49.595 "base_bdevs_list": [ 00:17:49.595 { 00:17:49.595 "name": "BaseBdev1", 00:17:49.595 "uuid": "aee7353b-b486-43c8-b4e2-3073e3347a0a", 00:17:49.595 "is_configured": true, 00:17:49.595 "data_offset": 256, 00:17:49.595 "data_size": 7936 00:17:49.595 }, 00:17:49.595 { 00:17:49.595 "name": "BaseBdev2", 00:17:49.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.595 "is_configured": false, 00:17:49.595 "data_offset": 0, 00:17:49.595 "data_size": 0 00:17:49.595 } 00:17:49.595 ] 00:17:49.595 }' 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:49.595 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 [2024-11-18 10:46:15.669248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.855 [2024-11-18 10:46:15.669536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:49.855 [2024-11-18 10:46:15.669583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:49.855 [2024-11-18 10:46:15.669691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:49.855 [2024-11-18 10:46:15.669808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:49.855 [2024-11-18 10:46:15.669843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:49.855 [2024-11-18 10:46:15.669934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.855 BaseBdev2 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 [ 00:17:49.855 { 00:17:49.855 "name": "BaseBdev2", 00:17:49.855 "aliases": [ 00:17:49.856 "8cfc9d4e-1c83-4900-b2f7-7e769c14d868" 00:17:49.856 ], 00:17:49.856 "product_name": "Malloc disk", 00:17:49.856 "block_size": 4128, 00:17:49.856 "num_blocks": 8192, 00:17:49.856 "uuid": "8cfc9d4e-1c83-4900-b2f7-7e769c14d868", 00:17:49.856 "md_size": 32, 00:17:49.856 "md_interleave": true, 00:17:49.856 "dif_type": 0, 00:17:49.856 "assigned_rate_limits": { 00:17:49.856 "rw_ios_per_sec": 0, 00:17:49.856 "rw_mbytes_per_sec": 0, 00:17:49.856 "r_mbytes_per_sec": 0, 00:17:49.856 "w_mbytes_per_sec": 0 00:17:49.856 }, 00:17:49.856 "claimed": true, 00:17:49.856 "claim_type": "exclusive_write", 
00:17:49.856 "zoned": false, 00:17:49.856 "supported_io_types": { 00:17:49.856 "read": true, 00:17:49.856 "write": true, 00:17:49.856 "unmap": true, 00:17:49.856 "flush": true, 00:17:49.856 "reset": true, 00:17:49.856 "nvme_admin": false, 00:17:49.856 "nvme_io": false, 00:17:49.856 "nvme_io_md": false, 00:17:49.856 "write_zeroes": true, 00:17:49.856 "zcopy": true, 00:17:49.856 "get_zone_info": false, 00:17:49.856 "zone_management": false, 00:17:49.856 "zone_append": false, 00:17:49.856 "compare": false, 00:17:49.856 "compare_and_write": false, 00:17:49.856 "abort": true, 00:17:49.856 "seek_hole": false, 00:17:49.856 "seek_data": false, 00:17:49.856 "copy": true, 00:17:49.856 "nvme_iov_md": false 00:17:49.856 }, 00:17:49.856 "memory_domains": [ 00:17:49.856 { 00:17:49.856 "dma_device_id": "system", 00:17:49.856 "dma_device_type": 1 00:17:49.856 }, 00:17:49.856 { 00:17:49.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.856 "dma_device_type": 2 00:17:49.856 } 00:17:49.856 ], 00:17:49.856 "driver_specific": {} 00:17:49.856 } 00:17:49.856 ] 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.856 
10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.856 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.116 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.116 "name": "Existed_Raid", 00:17:50.116 "uuid": "53e2fe82-2c48-4054-b43a-a668dfec8626", 00:17:50.116 "strip_size_kb": 0, 00:17:50.116 "state": "online", 00:17:50.116 "raid_level": "raid1", 00:17:50.116 "superblock": true, 00:17:50.116 "num_base_bdevs": 2, 00:17:50.116 "num_base_bdevs_discovered": 2, 00:17:50.116 
"num_base_bdevs_operational": 2, 00:17:50.116 "base_bdevs_list": [ 00:17:50.116 { 00:17:50.116 "name": "BaseBdev1", 00:17:50.116 "uuid": "aee7353b-b486-43c8-b4e2-3073e3347a0a", 00:17:50.116 "is_configured": true, 00:17:50.116 "data_offset": 256, 00:17:50.116 "data_size": 7936 00:17:50.116 }, 00:17:50.116 { 00:17:50.116 "name": "BaseBdev2", 00:17:50.116 "uuid": "8cfc9d4e-1c83-4900-b2f7-7e769c14d868", 00:17:50.116 "is_configured": true, 00:17:50.116 "data_offset": 256, 00:17:50.116 "data_size": 7936 00:17:50.116 } 00:17:50.116 ] 00:17:50.116 }' 00:17:50.116 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.116 10:46:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.377 10:46:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.377 [2024-11-18 10:46:16.124779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.377 "name": "Existed_Raid", 00:17:50.377 "aliases": [ 00:17:50.377 "53e2fe82-2c48-4054-b43a-a668dfec8626" 00:17:50.377 ], 00:17:50.377 "product_name": "Raid Volume", 00:17:50.377 "block_size": 4128, 00:17:50.377 "num_blocks": 7936, 00:17:50.377 "uuid": "53e2fe82-2c48-4054-b43a-a668dfec8626", 00:17:50.377 "md_size": 32, 00:17:50.377 "md_interleave": true, 00:17:50.377 "dif_type": 0, 00:17:50.377 "assigned_rate_limits": { 00:17:50.377 "rw_ios_per_sec": 0, 00:17:50.377 "rw_mbytes_per_sec": 0, 00:17:50.377 "r_mbytes_per_sec": 0, 00:17:50.377 "w_mbytes_per_sec": 0 00:17:50.377 }, 00:17:50.377 "claimed": false, 00:17:50.377 "zoned": false, 00:17:50.377 "supported_io_types": { 00:17:50.377 "read": true, 00:17:50.377 "write": true, 00:17:50.377 "unmap": false, 00:17:50.377 "flush": false, 00:17:50.377 "reset": true, 00:17:50.377 "nvme_admin": false, 00:17:50.377 "nvme_io": false, 00:17:50.377 "nvme_io_md": false, 00:17:50.377 "write_zeroes": true, 00:17:50.377 "zcopy": false, 00:17:50.377 "get_zone_info": false, 00:17:50.377 "zone_management": false, 00:17:50.377 "zone_append": false, 00:17:50.377 "compare": false, 00:17:50.377 "compare_and_write": false, 00:17:50.377 "abort": false, 00:17:50.377 "seek_hole": false, 00:17:50.377 "seek_data": false, 00:17:50.377 "copy": false, 00:17:50.377 "nvme_iov_md": false 00:17:50.377 }, 00:17:50.377 "memory_domains": [ 00:17:50.377 { 00:17:50.377 "dma_device_id": "system", 00:17:50.377 "dma_device_type": 1 00:17:50.377 }, 00:17:50.377 { 00:17:50.377 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:50.377 "dma_device_type": 2 00:17:50.377 }, 00:17:50.377 { 00:17:50.377 "dma_device_id": "system", 00:17:50.377 "dma_device_type": 1 00:17:50.377 }, 00:17:50.377 { 00:17:50.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.377 "dma_device_type": 2 00:17:50.377 } 00:17:50.377 ], 00:17:50.377 "driver_specific": { 00:17:50.377 "raid": { 00:17:50.377 "uuid": "53e2fe82-2c48-4054-b43a-a668dfec8626", 00:17:50.377 "strip_size_kb": 0, 00:17:50.377 "state": "online", 00:17:50.377 "raid_level": "raid1", 00:17:50.377 "superblock": true, 00:17:50.377 "num_base_bdevs": 2, 00:17:50.377 "num_base_bdevs_discovered": 2, 00:17:50.377 "num_base_bdevs_operational": 2, 00:17:50.377 "base_bdevs_list": [ 00:17:50.377 { 00:17:50.377 "name": "BaseBdev1", 00:17:50.377 "uuid": "aee7353b-b486-43c8-b4e2-3073e3347a0a", 00:17:50.377 "is_configured": true, 00:17:50.377 "data_offset": 256, 00:17:50.377 "data_size": 7936 00:17:50.377 }, 00:17:50.377 { 00:17:50.377 "name": "BaseBdev2", 00:17:50.377 "uuid": "8cfc9d4e-1c83-4900-b2f7-7e769c14d868", 00:17:50.377 "is_configured": true, 00:17:50.377 "data_offset": 256, 00:17:50.377 "data_size": 7936 00:17:50.377 } 00:17:50.377 ] 00:17:50.377 } 00:17:50.377 } 00:17:50.377 }' 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.377 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:50.378 BaseBdev2' 00:17:50.378 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.378 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:50.378 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:50.378 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:50.378 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.378 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.378 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.378 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:50.638 
10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.638 [2024-11-18 10:46:16.336222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.638 10:46:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.638 "name": "Existed_Raid", 00:17:50.638 "uuid": "53e2fe82-2c48-4054-b43a-a668dfec8626", 00:17:50.638 "strip_size_kb": 0, 00:17:50.638 "state": "online", 00:17:50.638 "raid_level": "raid1", 00:17:50.638 "superblock": true, 00:17:50.638 "num_base_bdevs": 2, 00:17:50.638 "num_base_bdevs_discovered": 1, 00:17:50.638 "num_base_bdevs_operational": 1, 00:17:50.638 "base_bdevs_list": [ 00:17:50.638 { 00:17:50.638 "name": null, 00:17:50.638 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:50.638 "is_configured": false, 00:17:50.638 "data_offset": 0, 00:17:50.638 "data_size": 7936 00:17:50.638 }, 00:17:50.638 { 00:17:50.638 "name": "BaseBdev2", 00:17:50.638 "uuid": "8cfc9d4e-1c83-4900-b2f7-7e769c14d868", 00:17:50.638 "is_configured": true, 00:17:50.638 "data_offset": 256, 00:17:50.638 "data_size": 7936 00:17:50.638 } 00:17:50.638 ] 00:17:50.638 }' 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.638 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:51.209 10:46:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.209 [2024-11-18 10:46:16.854451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:51.209 [2024-11-18 10:46:16.854558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.209 [2024-11-18 10:46:16.946039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.209 [2024-11-18 10:46:16.946092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.209 [2024-11-18 10:46:16.946104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88252 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88252 ']' 00:17:51.209 10:46:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88252 00:17:51.209 10:46:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:51.209 10:46:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.209 10:46:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88252 00:17:51.209 10:46:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.209 killing process with pid 88252 00:17:51.209 10:46:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.209 10:46:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88252' 00:17:51.209 10:46:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88252 00:17:51.209 [2024-11-18 10:46:17.042607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.209 10:46:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88252 00:17:51.209 [2024-11-18 10:46:17.058436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.593 
************************************ 00:17:52.593 END TEST raid_state_function_test_sb_md_interleaved 00:17:52.593 ************************************ 00:17:52.593 10:46:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:52.593 00:17:52.593 real 0m4.737s 00:17:52.593 user 0m6.716s 00:17:52.593 sys 0m0.859s 00:17:52.593 10:46:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.593 10:46:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.593 10:46:18 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:52.593 10:46:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:52.593 10:46:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.593 10:46:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:52.593 ************************************ 00:17:52.593 START TEST raid_superblock_test_md_interleaved 00:17:52.593 ************************************ 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88498 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88498 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88498 ']' 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.593 10:46:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.593 [2024-11-18 10:46:18.284772] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:52.593 [2024-11-18 10:46:18.284979] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88498 ] 00:17:52.593 [2024-11-18 10:46:18.464098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.853 [2024-11-18 10:46:18.569573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.113 [2024-11-18 10:46:18.756649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.113 [2024-11-18 10:46:18.756747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.374 malloc1 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.374 [2024-11-18 10:46:19.138826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.374 [2024-11-18 10:46:19.138959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.374 [2024-11-18 10:46:19.138983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:53.374 [2024-11-18 10:46:19.138993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.374 
[2024-11-18 10:46:19.140831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.374 [2024-11-18 10:46:19.140867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.374 pt1 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.374 malloc2 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.374 [2024-11-18 10:46:19.194410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.374 [2024-11-18 10:46:19.194518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.374 [2024-11-18 10:46:19.194553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:53.374 [2024-11-18 10:46:19.194579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.374 [2024-11-18 10:46:19.196330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.374 [2024-11-18 10:46:19.196396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.374 pt2 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.374 [2024-11-18 10:46:19.206426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.374 [2024-11-18 10:46:19.208195] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.374 [2024-11-18 10:46:19.208407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:53.374 [2024-11-18 10:46:19.208453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:53.374 [2024-11-18 10:46:19.208539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:53.374 [2024-11-18 10:46:19.208641] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:53.374 [2024-11-18 10:46:19.208681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:53.374 [2024-11-18 10:46:19.208783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.374 
10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.374 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.375 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.375 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.375 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.375 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.635 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.635 "name": "raid_bdev1", 00:17:53.635 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:53.635 "strip_size_kb": 0, 00:17:53.635 "state": "online", 00:17:53.635 "raid_level": "raid1", 00:17:53.635 "superblock": true, 00:17:53.635 "num_base_bdevs": 2, 00:17:53.635 "num_base_bdevs_discovered": 2, 00:17:53.635 "num_base_bdevs_operational": 2, 00:17:53.635 "base_bdevs_list": [ 00:17:53.635 { 00:17:53.635 "name": "pt1", 00:17:53.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.635 "is_configured": true, 00:17:53.635 "data_offset": 256, 00:17:53.635 "data_size": 7936 00:17:53.635 }, 00:17:53.635 { 00:17:53.635 "name": "pt2", 00:17:53.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.635 "is_configured": true, 00:17:53.635 "data_offset": 256, 00:17:53.635 "data_size": 7936 00:17:53.635 } 00:17:53.635 ] 00:17:53.635 }' 00:17:53.635 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.635 10:46:19 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.895 [2024-11-18 10:46:19.673946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.895 "name": "raid_bdev1", 00:17:53.895 "aliases": [ 00:17:53.895 "d69471cf-717a-439c-9ab8-24da85fea338" 00:17:53.895 ], 00:17:53.895 "product_name": "Raid Volume", 00:17:53.895 "block_size": 4128, 00:17:53.895 "num_blocks": 7936, 00:17:53.895 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:53.895 "md_size": 32, 
00:17:53.895 "md_interleave": true, 00:17:53.895 "dif_type": 0, 00:17:53.895 "assigned_rate_limits": { 00:17:53.895 "rw_ios_per_sec": 0, 00:17:53.895 "rw_mbytes_per_sec": 0, 00:17:53.895 "r_mbytes_per_sec": 0, 00:17:53.895 "w_mbytes_per_sec": 0 00:17:53.895 }, 00:17:53.895 "claimed": false, 00:17:53.895 "zoned": false, 00:17:53.895 "supported_io_types": { 00:17:53.895 "read": true, 00:17:53.895 "write": true, 00:17:53.895 "unmap": false, 00:17:53.895 "flush": false, 00:17:53.895 "reset": true, 00:17:53.895 "nvme_admin": false, 00:17:53.895 "nvme_io": false, 00:17:53.895 "nvme_io_md": false, 00:17:53.895 "write_zeroes": true, 00:17:53.895 "zcopy": false, 00:17:53.895 "get_zone_info": false, 00:17:53.895 "zone_management": false, 00:17:53.895 "zone_append": false, 00:17:53.895 "compare": false, 00:17:53.895 "compare_and_write": false, 00:17:53.895 "abort": false, 00:17:53.895 "seek_hole": false, 00:17:53.895 "seek_data": false, 00:17:53.895 "copy": false, 00:17:53.895 "nvme_iov_md": false 00:17:53.895 }, 00:17:53.895 "memory_domains": [ 00:17:53.895 { 00:17:53.895 "dma_device_id": "system", 00:17:53.895 "dma_device_type": 1 00:17:53.895 }, 00:17:53.895 { 00:17:53.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.895 "dma_device_type": 2 00:17:53.895 }, 00:17:53.895 { 00:17:53.895 "dma_device_id": "system", 00:17:53.895 "dma_device_type": 1 00:17:53.895 }, 00:17:53.895 { 00:17:53.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.895 "dma_device_type": 2 00:17:53.895 } 00:17:53.895 ], 00:17:53.895 "driver_specific": { 00:17:53.895 "raid": { 00:17:53.895 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:53.895 "strip_size_kb": 0, 00:17:53.895 "state": "online", 00:17:53.895 "raid_level": "raid1", 00:17:53.895 "superblock": true, 00:17:53.895 "num_base_bdevs": 2, 00:17:53.895 "num_base_bdevs_discovered": 2, 00:17:53.895 "num_base_bdevs_operational": 2, 00:17:53.895 "base_bdevs_list": [ 00:17:53.895 { 00:17:53.895 "name": "pt1", 00:17:53.895 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:53.895 "is_configured": true, 00:17:53.895 "data_offset": 256, 00:17:53.895 "data_size": 7936 00:17:53.895 }, 00:17:53.895 { 00:17:53.895 "name": "pt2", 00:17:53.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.895 "is_configured": true, 00:17:53.895 "data_offset": 256, 00:17:53.895 "data_size": 7936 00:17:53.895 } 00:17:53.895 ] 00:17:53.895 } 00:17:53.895 } 00:17:53.895 }' 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:53.895 pt2' 00:17:53.895 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:54.156 10:46:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.156 [2024-11-18 10:46:19.929460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d69471cf-717a-439c-9ab8-24da85fea338 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z d69471cf-717a-439c-9ab8-24da85fea338 ']' 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.156 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.156 [2024-11-18 10:46:19.957166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.156 [2024-11-18 10:46:19.957199] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.156 [2024-11-18 10:46:19.957270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.156 [2024-11-18 10:46:19.957317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.157 [2024-11-18 10:46:19.957327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:54.157 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.157 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.157 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.157 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.157 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:54.157 10:46:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.157 10:46:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.157 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.417 10:46:20 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.417 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.417 [2024-11-18 10:46:20.100936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:54.417 [2024-11-18 10:46:20.102768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:54.417 [2024-11-18 10:46:20.102835] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:17:54.417 [2024-11-18 10:46:20.102882] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:54.417 [2024-11-18 10:46:20.102895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.417 [2024-11-18 10:46:20.102904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:54.417 request: 00:17:54.417 { 00:17:54.417 "name": "raid_bdev1", 00:17:54.417 "raid_level": "raid1", 00:17:54.417 "base_bdevs": [ 00:17:54.417 "malloc1", 00:17:54.417 "malloc2" 00:17:54.417 ], 00:17:54.417 "superblock": false, 00:17:54.417 "method": "bdev_raid_create", 00:17:54.417 "req_id": 1 00:17:54.417 } 00:17:54.417 Got JSON-RPC error response 00:17:54.417 response: 00:17:54.417 { 00:17:54.417 "code": -17, 00:17:54.417 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:54.417 } 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.418 10:46:20 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.418 [2024-11-18 10:46:20.156823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:54.418 [2024-11-18 10:46:20.156914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.418 [2024-11-18 10:46:20.156944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:54.418 [2024-11-18 10:46:20.156972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.418 [2024-11-18 10:46:20.158703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.418 [2024-11-18 10:46:20.158775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:54.418 [2024-11-18 10:46:20.158835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:54.418 [2024-11-18 10:46:20.158904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:54.418 pt1 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.418 10:46:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.418 
"name": "raid_bdev1", 00:17:54.418 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:54.418 "strip_size_kb": 0, 00:17:54.418 "state": "configuring", 00:17:54.418 "raid_level": "raid1", 00:17:54.418 "superblock": true, 00:17:54.418 "num_base_bdevs": 2, 00:17:54.418 "num_base_bdevs_discovered": 1, 00:17:54.418 "num_base_bdevs_operational": 2, 00:17:54.418 "base_bdevs_list": [ 00:17:54.418 { 00:17:54.418 "name": "pt1", 00:17:54.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.418 "is_configured": true, 00:17:54.418 "data_offset": 256, 00:17:54.418 "data_size": 7936 00:17:54.418 }, 00:17:54.418 { 00:17:54.418 "name": null, 00:17:54.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.418 "is_configured": false, 00:17:54.418 "data_offset": 256, 00:17:54.418 "data_size": 7936 00:17:54.418 } 00:17:54.418 ] 00:17:54.418 }' 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.418 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.989 [2024-11-18 10:46:20.659972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.989 [2024-11-18 10:46:20.660030] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.989 [2024-11-18 10:46:20.660047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:54.989 [2024-11-18 10:46:20.660056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.989 [2024-11-18 10:46:20.660165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.989 [2024-11-18 10:46:20.660198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.989 [2024-11-18 10:46:20.660234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:54.989 [2024-11-18 10:46:20.660255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.989 [2024-11-18 10:46:20.660359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:54.989 [2024-11-18 10:46:20.660372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:54.989 [2024-11-18 10:46:20.660436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:54.989 [2024-11-18 10:46:20.660504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:54.989 [2024-11-18 10:46:20.660512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:54.989 [2024-11-18 10:46:20.660565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.989 pt2 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:54.989 10:46:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.989 "name": 
"raid_bdev1", 00:17:54.989 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:54.989 "strip_size_kb": 0, 00:17:54.989 "state": "online", 00:17:54.989 "raid_level": "raid1", 00:17:54.989 "superblock": true, 00:17:54.989 "num_base_bdevs": 2, 00:17:54.989 "num_base_bdevs_discovered": 2, 00:17:54.989 "num_base_bdevs_operational": 2, 00:17:54.989 "base_bdevs_list": [ 00:17:54.989 { 00:17:54.989 "name": "pt1", 00:17:54.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.989 "is_configured": true, 00:17:54.989 "data_offset": 256, 00:17:54.989 "data_size": 7936 00:17:54.989 }, 00:17:54.989 { 00:17:54.989 "name": "pt2", 00:17:54.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.989 "is_configured": true, 00:17:54.989 "data_offset": 256, 00:17:54.989 "data_size": 7936 00:17:54.989 } 00:17:54.989 ] 00:17:54.989 }' 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.989 10:46:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.249 10:46:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.249 [2024-11-18 10:46:21.103603] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.249 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.509 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.509 "name": "raid_bdev1", 00:17:55.509 "aliases": [ 00:17:55.509 "d69471cf-717a-439c-9ab8-24da85fea338" 00:17:55.509 ], 00:17:55.509 "product_name": "Raid Volume", 00:17:55.509 "block_size": 4128, 00:17:55.509 "num_blocks": 7936, 00:17:55.509 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:55.509 "md_size": 32, 00:17:55.509 "md_interleave": true, 00:17:55.509 "dif_type": 0, 00:17:55.509 "assigned_rate_limits": { 00:17:55.509 "rw_ios_per_sec": 0, 00:17:55.509 "rw_mbytes_per_sec": 0, 00:17:55.509 "r_mbytes_per_sec": 0, 00:17:55.509 "w_mbytes_per_sec": 0 00:17:55.509 }, 00:17:55.509 "claimed": false, 00:17:55.509 "zoned": false, 00:17:55.509 "supported_io_types": { 00:17:55.509 "read": true, 00:17:55.509 "write": true, 00:17:55.509 "unmap": false, 00:17:55.509 "flush": false, 00:17:55.509 "reset": true, 00:17:55.509 "nvme_admin": false, 00:17:55.509 "nvme_io": false, 00:17:55.509 "nvme_io_md": false, 00:17:55.509 "write_zeroes": true, 00:17:55.509 "zcopy": false, 00:17:55.509 "get_zone_info": false, 00:17:55.509 "zone_management": false, 00:17:55.509 "zone_append": false, 00:17:55.509 "compare": false, 00:17:55.509 "compare_and_write": false, 00:17:55.509 "abort": false, 00:17:55.509 "seek_hole": false, 00:17:55.509 "seek_data": false, 00:17:55.509 "copy": false, 00:17:55.509 "nvme_iov_md": 
false 00:17:55.509 }, 00:17:55.509 "memory_domains": [ 00:17:55.509 { 00:17:55.509 "dma_device_id": "system", 00:17:55.509 "dma_device_type": 1 00:17:55.509 }, 00:17:55.509 { 00:17:55.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.509 "dma_device_type": 2 00:17:55.509 }, 00:17:55.509 { 00:17:55.509 "dma_device_id": "system", 00:17:55.509 "dma_device_type": 1 00:17:55.509 }, 00:17:55.509 { 00:17:55.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.509 "dma_device_type": 2 00:17:55.509 } 00:17:55.509 ], 00:17:55.509 "driver_specific": { 00:17:55.509 "raid": { 00:17:55.509 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:55.509 "strip_size_kb": 0, 00:17:55.509 "state": "online", 00:17:55.509 "raid_level": "raid1", 00:17:55.509 "superblock": true, 00:17:55.509 "num_base_bdevs": 2, 00:17:55.509 "num_base_bdevs_discovered": 2, 00:17:55.509 "num_base_bdevs_operational": 2, 00:17:55.509 "base_bdevs_list": [ 00:17:55.509 { 00:17:55.509 "name": "pt1", 00:17:55.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.509 "is_configured": true, 00:17:55.509 "data_offset": 256, 00:17:55.509 "data_size": 7936 00:17:55.509 }, 00:17:55.509 { 00:17:55.509 "name": "pt2", 00:17:55.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.509 "is_configured": true, 00:17:55.509 "data_offset": 256, 00:17:55.509 "data_size": 7936 00:17:55.509 } 00:17:55.509 ] 00:17:55.509 } 00:17:55.509 } 00:17:55.509 }' 00:17:55.509 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:55.510 pt2' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.510 [2024-11-18 10:46:21.347167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' d69471cf-717a-439c-9ab8-24da85fea338 '!=' d69471cf-717a-439c-9ab8-24da85fea338 ']' 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.510 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.770 [2024-11-18 10:46:21.394890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:55.770 "name": "raid_bdev1", 00:17:55.770 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:55.770 "strip_size_kb": 0, 00:17:55.770 "state": "online", 00:17:55.770 "raid_level": "raid1", 00:17:55.770 "superblock": true, 00:17:55.770 "num_base_bdevs": 2, 00:17:55.770 "num_base_bdevs_discovered": 1, 00:17:55.770 "num_base_bdevs_operational": 1, 00:17:55.770 "base_bdevs_list": [ 00:17:55.770 { 00:17:55.770 "name": null, 00:17:55.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.770 "is_configured": false, 00:17:55.770 "data_offset": 0, 00:17:55.770 "data_size": 7936 00:17:55.770 }, 00:17:55.770 { 00:17:55.770 "name": "pt2", 00:17:55.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.770 "is_configured": true, 00:17:55.770 "data_offset": 256, 00:17:55.770 "data_size": 7936 00:17:55.770 } 00:17:55.770 ] 00:17:55.770 }' 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.770 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.031 [2024-11-18 10:46:21.834095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.031 [2024-11-18 10:46:21.834163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.031 [2024-11-18 10:46:21.834242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.031 [2024-11-18 10:46:21.834298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:56.031 [2024-11-18 10:46:21.834332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:56.031 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.032 [2024-11-18 10:46:21.890011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.032 [2024-11-18 10:46:21.890061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.032 [2024-11-18 10:46:21.890074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:56.032 [2024-11-18 10:46:21.890083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.032 [2024-11-18 10:46:21.891822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.032 [2024-11-18 10:46:21.891901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.032 [2024-11-18 10:46:21.891946] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:56.032 [2024-11-18 10:46:21.891988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.032 [2024-11-18 10:46:21.892044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:56.032 [2024-11-18 10:46:21.892055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:17:56.032 [2024-11-18 10:46:21.892130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:56.032 [2024-11-18 10:46:21.892201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:56.032 [2024-11-18 10:46:21.892209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:56.032 [2024-11-18 10:46:21.892261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.032 pt2 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.032 10:46:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.032 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.292 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.292 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.292 "name": "raid_bdev1", 00:17:56.292 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:56.292 "strip_size_kb": 0, 00:17:56.292 "state": "online", 00:17:56.292 "raid_level": "raid1", 00:17:56.292 "superblock": true, 00:17:56.292 "num_base_bdevs": 2, 00:17:56.292 "num_base_bdevs_discovered": 1, 00:17:56.292 "num_base_bdevs_operational": 1, 00:17:56.292 "base_bdevs_list": [ 00:17:56.292 { 00:17:56.292 "name": null, 00:17:56.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.292 "is_configured": false, 00:17:56.292 "data_offset": 256, 00:17:56.292 "data_size": 7936 00:17:56.292 }, 00:17:56.292 { 00:17:56.292 "name": "pt2", 00:17:56.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.292 "is_configured": true, 00:17:56.292 "data_offset": 256, 00:17:56.292 "data_size": 7936 00:17:56.292 } 00:17:56.292 ] 00:17:56.292 }' 00:17:56.292 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.292 10:46:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:56.553 10:46:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.553 [2024-11-18 10:46:22.369147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.553 [2024-11-18 10:46:22.369252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.553 [2024-11-18 10:46:22.369321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.553 [2024-11-18 10:46:22.369375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.553 [2024-11-18 10:46:22.369408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.553 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.553 [2024-11-18 10:46:22.433070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:56.553 [2024-11-18 10:46:22.433166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.553 [2024-11-18 10:46:22.433218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:56.553 [2024-11-18 10:46:22.433245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.553 [2024-11-18 10:46:22.435024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.553 [2024-11-18 10:46:22.435087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:56.553 [2024-11-18 10:46:22.435147] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:56.553 [2024-11-18 10:46:22.435228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:56.553 [2024-11-18 10:46:22.435341] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:56.553 [2024-11-18 10:46:22.435413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.553 [2024-11-18 10:46:22.435443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:56.553 [2024-11-18 10:46:22.435535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.553 [2024-11-18 10:46:22.435625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:17:56.553 [2024-11-18 10:46:22.435658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:56.553 [2024-11-18 10:46:22.435729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:56.553 [2024-11-18 10:46:22.435814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:56.553 [2024-11-18 10:46:22.435852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:56.553 [2024-11-18 10:46:22.435949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.815 pt1 00:17:56.815 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.815 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:56.815 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.815 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.815 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.815 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.815 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.815 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.816 10:46:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.816 "name": "raid_bdev1", 00:17:56.816 "uuid": "d69471cf-717a-439c-9ab8-24da85fea338", 00:17:56.816 "strip_size_kb": 0, 00:17:56.816 "state": "online", 00:17:56.816 "raid_level": "raid1", 00:17:56.816 "superblock": true, 00:17:56.816 "num_base_bdevs": 2, 00:17:56.816 "num_base_bdevs_discovered": 1, 00:17:56.816 "num_base_bdevs_operational": 1, 00:17:56.816 "base_bdevs_list": [ 00:17:56.816 { 00:17:56.816 "name": null, 00:17:56.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.816 "is_configured": false, 00:17:56.816 "data_offset": 256, 00:17:56.816 "data_size": 7936 00:17:56.816 }, 00:17:56.816 { 00:17:56.816 "name": "pt2", 00:17:56.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.816 "is_configured": true, 00:17:56.816 "data_offset": 256, 00:17:56.816 "data_size": 7936 00:17:56.816 } 00:17:56.816 ] 00:17:56.816 }' 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.816 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.091 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 [2024-11-18 10:46:22.952395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.369 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.369 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' d69471cf-717a-439c-9ab8-24da85fea338 '!=' d69471cf-717a-439c-9ab8-24da85fea338 ']' 00:17:57.370 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88498 00:17:57.370 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88498 ']' 00:17:57.370 10:46:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88498 00:17:57.370 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:57.370 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.370 10:46:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88498 00:17:57.370 10:46:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.370 killing process with pid 88498 00:17:57.370 10:46:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.370 10:46:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88498' 00:17:57.370 10:46:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88498 00:17:57.370 [2024-11-18 10:46:23.026087] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.370 [2024-11-18 10:46:23.026151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.370 [2024-11-18 10:46:23.026198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.370 [2024-11-18 10:46:23.026212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:57.370 10:46:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88498 00:17:57.370 [2024-11-18 10:46:23.219017] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.752 10:46:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:58.752 ************************************ 00:17:58.752 END TEST 
raid_superblock_test_md_interleaved 00:17:58.752 ************************************ 00:17:58.752 00:17:58.752 real 0m6.072s 00:17:58.752 user 0m9.231s 00:17:58.752 sys 0m1.157s 00:17:58.752 10:46:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.752 10:46:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.752 10:46:24 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:58.752 10:46:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:58.752 10:46:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.752 10:46:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.752 ************************************ 00:17:58.752 START TEST raid_rebuild_test_sb_md_interleaved 00:17:58.752 ************************************ 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:58.752 10:46:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:58.752 
10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88821 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88821 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88821 ']' 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.752 10:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.752 [2024-11-18 10:46:24.444762] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:58.752 [2024-11-18 10:46:24.444948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:58.752 Zero copy mechanism will not be used. 
00:17:58.752 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88821 ] 00:17:58.752 [2024-11-18 10:46:24.626026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.012 [2024-11-18 10:46:24.736929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.271 [2024-11-18 10:46:24.915992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.271 [2024-11-18 10:46:24.916122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.531 BaseBdev1_malloc 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:59.531 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.532 [2024-11-18 10:46:25.292629] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:59.532 [2024-11-18 10:46:25.292767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.532 [2024-11-18 10:46:25.292806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:59.532 [2024-11-18 10:46:25.292836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.532 [2024-11-18 10:46:25.294611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.532 [2024-11-18 10:46:25.294685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:59.532 BaseBdev1 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.532 BaseBdev2_malloc 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.532 [2024-11-18 10:46:25.347574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:17:59.532 [2024-11-18 10:46:25.347690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.532 [2024-11-18 10:46:25.347728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:59.532 [2024-11-18 10:46:25.347760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.532 [2024-11-18 10:46:25.349499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.532 [2024-11-18 10:46:25.349567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:59.532 BaseBdev2 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.532 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.791 spare_malloc 00:17:59.791 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.791 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:59.791 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.791 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.791 spare_delay 00:17:59.791 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.791 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:17:59.791 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.792 [2024-11-18 10:46:25.446027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:59.792 [2024-11-18 10:46:25.446085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.792 [2024-11-18 10:46:25.446105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:59.792 [2024-11-18 10:46:25.446115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.792 [2024-11-18 10:46:25.447868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.792 [2024-11-18 10:46:25.447907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:59.792 spare 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.792 [2024-11-18 10:46:25.458045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.792 [2024-11-18 10:46:25.459765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:59.792 [2024-11-18 10:46:25.459953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:59.792 [2024-11-18 10:46:25.459966] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:59.792 [2024-11-18 10:46:25.460037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:59.792 [2024-11-18 10:46:25.460101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:59.792 [2024-11-18 10:46:25.460108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:59.792 [2024-11-18 10:46:25.460167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.792 10:46:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.792 "name": "raid_bdev1", 00:17:59.792 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:17:59.792 "strip_size_kb": 0, 00:17:59.792 "state": "online", 00:17:59.792 "raid_level": "raid1", 00:17:59.792 "superblock": true, 00:17:59.792 "num_base_bdevs": 2, 00:17:59.792 "num_base_bdevs_discovered": 2, 00:17:59.792 "num_base_bdevs_operational": 2, 00:17:59.792 "base_bdevs_list": [ 00:17:59.792 { 00:17:59.792 "name": "BaseBdev1", 00:17:59.792 "uuid": "ae580310-4f8f-53b0-8aa8-fed7e8d619d6", 00:17:59.792 "is_configured": true, 00:17:59.792 "data_offset": 256, 00:17:59.792 "data_size": 7936 00:17:59.792 }, 00:17:59.792 { 00:17:59.792 "name": "BaseBdev2", 00:17:59.792 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:17:59.792 "is_configured": true, 00:17:59.792 "data_offset": 256, 00:17:59.792 "data_size": 7936 00:17:59.792 } 00:17:59.792 ] 00:17:59.792 }' 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.792 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:00.362 10:46:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.362 [2024-11-18 10:46:25.957427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.362 10:46:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.362 10:46:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.362 [2024-11-18 10:46:26.033023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.362 10:46:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.362 "name": "raid_bdev1", 00:18:00.362 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:00.362 "strip_size_kb": 0, 00:18:00.362 "state": "online", 00:18:00.362 "raid_level": "raid1", 00:18:00.362 "superblock": true, 00:18:00.362 "num_base_bdevs": 2, 00:18:00.362 "num_base_bdevs_discovered": 1, 00:18:00.362 "num_base_bdevs_operational": 1, 00:18:00.362 "base_bdevs_list": [ 00:18:00.362 { 00:18:00.362 "name": null, 00:18:00.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.362 "is_configured": false, 00:18:00.362 "data_offset": 0, 00:18:00.362 "data_size": 7936 00:18:00.362 }, 00:18:00.362 { 00:18:00.362 "name": "BaseBdev2", 00:18:00.362 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:00.362 "is_configured": true, 00:18:00.362 "data_offset": 256, 00:18:00.362 "data_size": 7936 00:18:00.362 } 00:18:00.362 ] 00:18:00.362 }' 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.362 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.623 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:00.623 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.623 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.623 [2024-11-18 10:46:26.492278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.883 [2024-11-18 10:46:26.509717] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:00.883 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.883 [2024-11-18 10:46:26.511486] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.883 10:46:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.823 "name": "raid_bdev1", 00:18:01.823 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:01.823 "strip_size_kb": 0, 00:18:01.823 "state": "online", 00:18:01.823 "raid_level": "raid1", 00:18:01.823 
"superblock": true, 00:18:01.823 "num_base_bdevs": 2, 00:18:01.823 "num_base_bdevs_discovered": 2, 00:18:01.823 "num_base_bdevs_operational": 2, 00:18:01.823 "process": { 00:18:01.823 "type": "rebuild", 00:18:01.823 "target": "spare", 00:18:01.823 "progress": { 00:18:01.823 "blocks": 2560, 00:18:01.823 "percent": 32 00:18:01.823 } 00:18:01.823 }, 00:18:01.823 "base_bdevs_list": [ 00:18:01.823 { 00:18:01.823 "name": "spare", 00:18:01.823 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:01.823 "is_configured": true, 00:18:01.823 "data_offset": 256, 00:18:01.823 "data_size": 7936 00:18:01.823 }, 00:18:01.823 { 00:18:01.823 "name": "BaseBdev2", 00:18:01.823 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:01.823 "is_configured": true, 00:18:01.823 "data_offset": 256, 00:18:01.823 "data_size": 7936 00:18:01.823 } 00:18:01.823 ] 00:18:01.823 }' 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.823 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.823 [2024-11-18 10:46:27.671408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.083 [2024-11-18 10:46:27.716224] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:18:02.083 [2024-11-18 10:46:27.716298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.083 [2024-11-18 10:46:27.716312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.083 [2024-11-18 10:46:27.716324] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.083 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.083 "name": "raid_bdev1", 00:18:02.083 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:02.083 "strip_size_kb": 0, 00:18:02.083 "state": "online", 00:18:02.083 "raid_level": "raid1", 00:18:02.083 "superblock": true, 00:18:02.083 "num_base_bdevs": 2, 00:18:02.083 "num_base_bdevs_discovered": 1, 00:18:02.083 "num_base_bdevs_operational": 1, 00:18:02.083 "base_bdevs_list": [ 00:18:02.083 { 00:18:02.083 "name": null, 00:18:02.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.083 "is_configured": false, 00:18:02.083 "data_offset": 0, 00:18:02.083 "data_size": 7936 00:18:02.083 }, 00:18:02.083 { 00:18:02.083 "name": "BaseBdev2", 00:18:02.083 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:02.083 "is_configured": true, 00:18:02.083 "data_offset": 256, 00:18:02.083 "data_size": 7936 00:18:02.083 } 00:18:02.084 ] 00:18:02.084 }' 00:18:02.084 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.084 10:46:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.343 
10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.343 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.603 "name": "raid_bdev1", 00:18:02.603 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:02.603 "strip_size_kb": 0, 00:18:02.603 "state": "online", 00:18:02.603 "raid_level": "raid1", 00:18:02.603 "superblock": true, 00:18:02.603 "num_base_bdevs": 2, 00:18:02.603 "num_base_bdevs_discovered": 1, 00:18:02.603 "num_base_bdevs_operational": 1, 00:18:02.603 "base_bdevs_list": [ 00:18:02.603 { 00:18:02.603 "name": null, 00:18:02.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.603 "is_configured": false, 00:18:02.603 "data_offset": 0, 00:18:02.603 "data_size": 7936 00:18:02.603 }, 00:18:02.603 { 00:18:02.603 "name": "BaseBdev2", 00:18:02.603 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:02.603 "is_configured": true, 00:18:02.603 "data_offset": 256, 00:18:02.603 "data_size": 7936 00:18:02.603 } 00:18:02.603 ] 00:18:02.603 }' 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.603 10:46:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.603 [2024-11-18 10:46:28.325267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:02.603 [2024-11-18 10:46:28.340839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.603 [2024-11-18 10:46:28.342616] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.603 10:46:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.543 
10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.543 "name": "raid_bdev1", 00:18:03.543 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:03.543 "strip_size_kb": 0, 00:18:03.543 "state": "online", 00:18:03.543 "raid_level": "raid1", 00:18:03.543 "superblock": true, 00:18:03.543 "num_base_bdevs": 2, 00:18:03.543 "num_base_bdevs_discovered": 2, 00:18:03.543 "num_base_bdevs_operational": 2, 00:18:03.543 "process": { 00:18:03.543 "type": "rebuild", 00:18:03.543 "target": "spare", 00:18:03.543 "progress": { 00:18:03.543 "blocks": 2560, 00:18:03.543 "percent": 32 00:18:03.543 } 00:18:03.543 }, 00:18:03.543 "base_bdevs_list": [ 00:18:03.543 { 00:18:03.543 "name": "spare", 00:18:03.543 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:03.543 "is_configured": true, 00:18:03.543 "data_offset": 256, 00:18:03.543 "data_size": 7936 00:18:03.543 }, 00:18:03.543 { 00:18:03.543 "name": "BaseBdev2", 00:18:03.543 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:03.543 "is_configured": true, 00:18:03.543 "data_offset": 256, 00:18:03.543 "data_size": 7936 00:18:03.543 } 00:18:03.543 ] 00:18:03.543 }' 00:18:03.543 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:03.803 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=731 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.803 "name": "raid_bdev1", 00:18:03.803 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:03.803 "strip_size_kb": 0, 00:18:03.803 "state": "online", 00:18:03.803 "raid_level": "raid1", 00:18:03.803 "superblock": true, 00:18:03.803 "num_base_bdevs": 2, 00:18:03.803 "num_base_bdevs_discovered": 2, 00:18:03.803 "num_base_bdevs_operational": 2, 00:18:03.803 "process": { 00:18:03.803 "type": "rebuild", 00:18:03.803 "target": "spare", 00:18:03.803 "progress": { 00:18:03.803 "blocks": 2816, 00:18:03.803 "percent": 35 00:18:03.803 } 00:18:03.803 }, 00:18:03.803 "base_bdevs_list": [ 00:18:03.803 { 00:18:03.803 "name": "spare", 00:18:03.803 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:03.803 "is_configured": true, 00:18:03.803 "data_offset": 256, 00:18:03.803 "data_size": 7936 00:18:03.803 }, 00:18:03.803 { 00:18:03.803 "name": "BaseBdev2", 00:18:03.803 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:03.803 "is_configured": true, 00:18:03.803 "data_offset": 256, 00:18:03.803 "data_size": 7936 00:18:03.803 } 00:18:03.803 ] 00:18:03.803 }' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.803 10:46:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.803 10:46:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.184 "name": "raid_bdev1", 00:18:05.184 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:05.184 "strip_size_kb": 0, 00:18:05.184 "state": 
"online", 00:18:05.184 "raid_level": "raid1", 00:18:05.184 "superblock": true, 00:18:05.184 "num_base_bdevs": 2, 00:18:05.184 "num_base_bdevs_discovered": 2, 00:18:05.184 "num_base_bdevs_operational": 2, 00:18:05.184 "process": { 00:18:05.184 "type": "rebuild", 00:18:05.184 "target": "spare", 00:18:05.184 "progress": { 00:18:05.184 "blocks": 5888, 00:18:05.184 "percent": 74 00:18:05.184 } 00:18:05.184 }, 00:18:05.184 "base_bdevs_list": [ 00:18:05.184 { 00:18:05.184 "name": "spare", 00:18:05.184 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:05.184 "is_configured": true, 00:18:05.184 "data_offset": 256, 00:18:05.184 "data_size": 7936 00:18:05.184 }, 00:18:05.184 { 00:18:05.184 "name": "BaseBdev2", 00:18:05.184 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:05.184 "is_configured": true, 00:18:05.184 "data_offset": 256, 00:18:05.184 "data_size": 7936 00:18:05.184 } 00:18:05.184 ] 00:18:05.184 }' 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.184 10:46:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:05.755 [2024-11-18 10:46:31.454212] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:05.755 [2024-11-18 10:46:31.454336] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:05.755 [2024-11-18 10:46:31.454430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.015 "name": "raid_bdev1", 00:18:06.015 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:06.015 "strip_size_kb": 0, 00:18:06.015 "state": "online", 00:18:06.015 "raid_level": "raid1", 00:18:06.015 "superblock": true, 00:18:06.015 "num_base_bdevs": 2, 00:18:06.015 "num_base_bdevs_discovered": 2, 00:18:06.015 "num_base_bdevs_operational": 2, 00:18:06.015 "base_bdevs_list": [ 00:18:06.015 { 00:18:06.015 "name": "spare", 00:18:06.015 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:06.015 "is_configured": true, 00:18:06.015 "data_offset": 256, 
00:18:06.015 "data_size": 7936 00:18:06.015 }, 00:18:06.015 { 00:18:06.015 "name": "BaseBdev2", 00:18:06.015 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:06.015 "is_configured": true, 00:18:06.015 "data_offset": 256, 00:18:06.015 "data_size": 7936 00:18:06.015 } 00:18:06.015 ] 00:18:06.015 }' 00:18:06.015 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.276 10:46:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.276 "name": "raid_bdev1", 00:18:06.276 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:06.276 "strip_size_kb": 0, 00:18:06.276 "state": "online", 00:18:06.276 "raid_level": "raid1", 00:18:06.276 "superblock": true, 00:18:06.276 "num_base_bdevs": 2, 00:18:06.276 "num_base_bdevs_discovered": 2, 00:18:06.276 "num_base_bdevs_operational": 2, 00:18:06.276 "base_bdevs_list": [ 00:18:06.276 { 00:18:06.276 "name": "spare", 00:18:06.276 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:06.276 "is_configured": true, 00:18:06.276 "data_offset": 256, 00:18:06.276 "data_size": 7936 00:18:06.276 }, 00:18:06.276 { 00:18:06.276 "name": "BaseBdev2", 00:18:06.276 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:06.276 "is_configured": true, 00:18:06.276 "data_offset": 256, 00:18:06.276 "data_size": 7936 00:18:06.276 } 00:18:06.276 ] 00:18:06.276 }' 00:18:06.276 10:46:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.276 10:46:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.276 "name": "raid_bdev1", 00:18:06.276 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:06.276 "strip_size_kb": 0, 00:18:06.276 "state": "online", 00:18:06.276 "raid_level": "raid1", 00:18:06.276 "superblock": true, 00:18:06.276 "num_base_bdevs": 2, 00:18:06.276 "num_base_bdevs_discovered": 2, 
00:18:06.276 "num_base_bdevs_operational": 2, 00:18:06.276 "base_bdevs_list": [ 00:18:06.276 { 00:18:06.276 "name": "spare", 00:18:06.276 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:06.276 "is_configured": true, 00:18:06.276 "data_offset": 256, 00:18:06.276 "data_size": 7936 00:18:06.276 }, 00:18:06.276 { 00:18:06.276 "name": "BaseBdev2", 00:18:06.276 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:06.276 "is_configured": true, 00:18:06.276 "data_offset": 256, 00:18:06.276 "data_size": 7936 00:18:06.276 } 00:18:06.276 ] 00:18:06.276 }' 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.276 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.847 [2024-11-18 10:46:32.516804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.847 [2024-11-18 10:46:32.516882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.847 [2024-11-18 10:46:32.516979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.847 [2024-11-18 10:46:32.517043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.847 [2024-11-18 10:46:32.517053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.847 10:46:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.847 [2024-11-18 10:46:32.588692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.847 [2024-11-18 10:46:32.588760] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:06.847 [2024-11-18 10:46:32.588783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:06.847 [2024-11-18 10:46:32.588792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.847 [2024-11-18 10:46:32.590631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.847 [2024-11-18 10:46:32.590664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.847 [2024-11-18 10:46:32.590713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:06.847 [2024-11-18 10:46:32.590762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.847 [2024-11-18 10:46:32.590853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.847 spare 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.847 [2024-11-18 10:46:32.690732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:06.847 [2024-11-18 10:46:32.690761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:06.847 [2024-11-18 10:46:32.690840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:06.847 [2024-11-18 10:46:32.690910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:06.847 [2024-11-18 10:46:32.690918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:06.847 [2024-11-18 10:46:32.690988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.847 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.847 10:46:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.848 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.107 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.107 "name": "raid_bdev1", 00:18:07.107 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:07.107 "strip_size_kb": 0, 00:18:07.107 "state": "online", 00:18:07.107 "raid_level": "raid1", 00:18:07.107 "superblock": true, 00:18:07.107 "num_base_bdevs": 2, 00:18:07.107 "num_base_bdevs_discovered": 2, 00:18:07.107 "num_base_bdevs_operational": 2, 00:18:07.107 "base_bdevs_list": [ 00:18:07.107 { 00:18:07.107 "name": "spare", 00:18:07.107 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:07.107 "is_configured": true, 00:18:07.107 "data_offset": 256, 00:18:07.107 "data_size": 7936 00:18:07.107 }, 00:18:07.107 { 00:18:07.107 "name": "BaseBdev2", 00:18:07.107 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:07.107 "is_configured": true, 00:18:07.107 "data_offset": 256, 00:18:07.107 "data_size": 7936 00:18:07.107 } 00:18:07.107 ] 00:18:07.107 }' 00:18:07.107 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.107 10:46:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.368 "name": "raid_bdev1", 00:18:07.368 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:07.368 "strip_size_kb": 0, 00:18:07.368 "state": "online", 00:18:07.368 "raid_level": "raid1", 00:18:07.368 "superblock": true, 00:18:07.368 "num_base_bdevs": 2, 00:18:07.368 "num_base_bdevs_discovered": 2, 00:18:07.368 "num_base_bdevs_operational": 2, 00:18:07.368 "base_bdevs_list": [ 00:18:07.368 { 00:18:07.368 "name": "spare", 00:18:07.368 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:07.368 "is_configured": true, 00:18:07.368 "data_offset": 256, 00:18:07.368 "data_size": 7936 00:18:07.368 }, 00:18:07.368 { 00:18:07.368 "name": "BaseBdev2", 00:18:07.368 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:07.368 "is_configured": true, 00:18:07.368 "data_offset": 256, 00:18:07.368 "data_size": 7936 00:18:07.368 } 00:18:07.368 ] 00:18:07.368 }' 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.368 [2024-11-18 10:46:33.191667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.368 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.368 "name": "raid_bdev1", 00:18:07.368 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:07.368 "strip_size_kb": 0, 00:18:07.368 "state": "online", 00:18:07.368 "raid_level": "raid1", 00:18:07.368 "superblock": true, 00:18:07.368 "num_base_bdevs": 2, 00:18:07.368 "num_base_bdevs_discovered": 1, 00:18:07.368 "num_base_bdevs_operational": 1, 00:18:07.368 "base_bdevs_list": [ 00:18:07.368 { 00:18:07.368 "name": null, 00:18:07.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.368 
"is_configured": false, 00:18:07.368 "data_offset": 0, 00:18:07.368 "data_size": 7936 00:18:07.368 }, 00:18:07.368 { 00:18:07.368 "name": "BaseBdev2", 00:18:07.368 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:07.368 "is_configured": true, 00:18:07.368 "data_offset": 256, 00:18:07.368 "data_size": 7936 00:18:07.368 } 00:18:07.368 ] 00:18:07.368 }' 00:18:07.369 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.369 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.938 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.938 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.938 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.939 [2024-11-18 10:46:33.595250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.939 [2024-11-18 10:46:33.595426] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.939 [2024-11-18 10:46:33.595446] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:07.939 [2024-11-18 10:46:33.595475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.939 [2024-11-18 10:46:33.610374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:07.939 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.939 10:46:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:07.939 [2024-11-18 10:46:33.612110] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:08.879 "name": "raid_bdev1", 00:18:08.879 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:08.879 "strip_size_kb": 0, 00:18:08.879 "state": "online", 00:18:08.879 "raid_level": "raid1", 00:18:08.879 "superblock": true, 00:18:08.879 "num_base_bdevs": 2, 00:18:08.879 "num_base_bdevs_discovered": 2, 00:18:08.879 "num_base_bdevs_operational": 2, 00:18:08.879 "process": { 00:18:08.879 "type": "rebuild", 00:18:08.879 "target": "spare", 00:18:08.879 "progress": { 00:18:08.879 "blocks": 2560, 00:18:08.879 "percent": 32 00:18:08.879 } 00:18:08.879 }, 00:18:08.879 "base_bdevs_list": [ 00:18:08.879 { 00:18:08.879 "name": "spare", 00:18:08.879 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:08.879 "is_configured": true, 00:18:08.879 "data_offset": 256, 00:18:08.879 "data_size": 7936 00:18:08.879 }, 00:18:08.879 { 00:18:08.879 "name": "BaseBdev2", 00:18:08.879 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:08.879 "is_configured": true, 00:18:08.879 "data_offset": 256, 00:18:08.879 "data_size": 7936 00:18:08.879 } 00:18:08.879 ] 00:18:08.879 }' 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.879 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.879 [2024-11-18 10:46:34.751791] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.139 [2024-11-18 10:46:34.816672] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.139 [2024-11-18 10:46:34.816780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.139 [2024-11-18 10:46:34.816814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.139 [2024-11-18 10:46:34.816826] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.139 10:46:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.139 "name": "raid_bdev1", 00:18:09.139 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:09.139 "strip_size_kb": 0, 00:18:09.139 "state": "online", 00:18:09.139 "raid_level": "raid1", 00:18:09.139 "superblock": true, 00:18:09.139 "num_base_bdevs": 2, 00:18:09.139 "num_base_bdevs_discovered": 1, 00:18:09.139 "num_base_bdevs_operational": 1, 00:18:09.139 "base_bdevs_list": [ 00:18:09.139 { 00:18:09.139 "name": null, 00:18:09.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.139 "is_configured": false, 00:18:09.139 "data_offset": 0, 00:18:09.139 "data_size": 7936 00:18:09.139 }, 00:18:09.139 { 00:18:09.139 "name": "BaseBdev2", 00:18:09.139 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:09.139 "is_configured": true, 00:18:09.139 "data_offset": 256, 00:18:09.139 "data_size": 7936 00:18:09.139 } 00:18:09.139 ] 00:18:09.139 }' 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.139 10:46:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.400 10:46:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.400 10:46:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.400 10:46:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.660 [2024-11-18 10:46:35.285573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.660 [2024-11-18 10:46:35.285673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.660 [2024-11-18 10:46:35.285715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:09.660 [2024-11-18 10:46:35.285746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.660 [2024-11-18 10:46:35.285937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.660 [2024-11-18 10:46:35.285986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.660 [2024-11-18 10:46:35.286058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:09.660 [2024-11-18 10:46:35.286095] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:09.660 [2024-11-18 10:46:35.286136] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:09.660 [2024-11-18 10:46:35.286227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.660 [2024-11-18 10:46:35.300703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:09.660 spare 00:18:09.660 10:46:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.660 [2024-11-18 10:46:35.302409] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.660 10:46:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:10.599 "name": "raid_bdev1", 00:18:10.599 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:10.599 "strip_size_kb": 0, 00:18:10.599 "state": "online", 00:18:10.599 "raid_level": "raid1", 00:18:10.599 "superblock": true, 00:18:10.599 "num_base_bdevs": 2, 00:18:10.599 "num_base_bdevs_discovered": 2, 00:18:10.599 "num_base_bdevs_operational": 2, 00:18:10.599 "process": { 00:18:10.599 "type": "rebuild", 00:18:10.599 "target": "spare", 00:18:10.599 "progress": { 00:18:10.599 "blocks": 2560, 00:18:10.599 "percent": 32 00:18:10.599 } 00:18:10.599 }, 00:18:10.599 "base_bdevs_list": [ 00:18:10.599 { 00:18:10.599 "name": "spare", 00:18:10.599 "uuid": "610e1650-fed6-5973-b37e-c82eaff8d952", 00:18:10.599 "is_configured": true, 00:18:10.599 "data_offset": 256, 00:18:10.599 "data_size": 7936 00:18:10.599 }, 00:18:10.599 { 00:18:10.599 "name": "BaseBdev2", 00:18:10.599 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:10.599 "is_configured": true, 00:18:10.599 "data_offset": 256, 00:18:10.599 "data_size": 7936 00:18:10.599 } 00:18:10.599 ] 00:18:10.599 }' 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.599 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.599 [2024-11-18 
10:46:36.454184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.859 [2024-11-18 10:46:36.506951] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:10.859 [2024-11-18 10:46:36.507007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.859 [2024-11-18 10:46:36.507023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.859 [2024-11-18 10:46:36.507030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:10.859 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.859 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.860 10:46:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.860 "name": "raid_bdev1", 00:18:10.860 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:10.860 "strip_size_kb": 0, 00:18:10.860 "state": "online", 00:18:10.860 "raid_level": "raid1", 00:18:10.860 "superblock": true, 00:18:10.860 "num_base_bdevs": 2, 00:18:10.860 "num_base_bdevs_discovered": 1, 00:18:10.860 "num_base_bdevs_operational": 1, 00:18:10.860 "base_bdevs_list": [ 00:18:10.860 { 00:18:10.860 "name": null, 00:18:10.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.860 "is_configured": false, 00:18:10.860 "data_offset": 0, 00:18:10.860 "data_size": 7936 00:18:10.860 }, 00:18:10.860 { 00:18:10.860 "name": "BaseBdev2", 00:18:10.860 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:10.860 "is_configured": true, 00:18:10.860 "data_offset": 256, 00:18:10.860 "data_size": 7936 00:18:10.860 } 00:18:10.860 ] 00:18:10.860 }' 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.860 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.119 10:46:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.119 10:46:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.380 "name": "raid_bdev1", 00:18:11.380 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:11.380 "strip_size_kb": 0, 00:18:11.380 "state": "online", 00:18:11.380 "raid_level": "raid1", 00:18:11.380 "superblock": true, 00:18:11.380 "num_base_bdevs": 2, 00:18:11.380 "num_base_bdevs_discovered": 1, 00:18:11.380 "num_base_bdevs_operational": 1, 00:18:11.380 "base_bdevs_list": [ 00:18:11.380 { 00:18:11.380 "name": null, 00:18:11.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.380 "is_configured": false, 00:18:11.380 "data_offset": 0, 00:18:11.380 "data_size": 7936 00:18:11.380 }, 00:18:11.380 { 00:18:11.380 "name": "BaseBdev2", 00:18:11.380 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:11.380 "is_configured": true, 00:18:11.380 "data_offset": 256, 
00:18:11.380 "data_size": 7936 00:18:11.380 } 00:18:11.380 ] 00:18:11.380 }' 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.380 [2024-11-18 10:46:37.130573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:11.380 [2024-11-18 10:46:37.130628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.380 [2024-11-18 10:46:37.130652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:11.380 [2024-11-18 10:46:37.130660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.380 [2024-11-18 10:46:37.130802] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.380 [2024-11-18 10:46:37.130812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:11.380 [2024-11-18 10:46:37.130858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:11.380 [2024-11-18 10:46:37.130870] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:11.380 [2024-11-18 10:46:37.130878] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:11.380 [2024-11-18 10:46:37.130887] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:11.380 BaseBdev1 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.380 10:46:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.319 10:46:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.319 "name": "raid_bdev1", 00:18:12.319 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:12.319 "strip_size_kb": 0, 00:18:12.319 "state": "online", 00:18:12.319 "raid_level": "raid1", 00:18:12.319 "superblock": true, 00:18:12.319 "num_base_bdevs": 2, 00:18:12.319 "num_base_bdevs_discovered": 1, 00:18:12.319 "num_base_bdevs_operational": 1, 00:18:12.319 "base_bdevs_list": [ 00:18:12.319 { 00:18:12.319 "name": null, 00:18:12.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.319 "is_configured": false, 00:18:12.319 "data_offset": 0, 00:18:12.319 "data_size": 7936 00:18:12.319 }, 00:18:12.319 { 00:18:12.319 "name": "BaseBdev2", 00:18:12.319 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:12.319 "is_configured": true, 00:18:12.319 "data_offset": 256, 00:18:12.319 "data_size": 7936 00:18:12.319 } 00:18:12.319 ] 00:18:12.319 }' 00:18:12.319 10:46:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.319 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.888 "name": "raid_bdev1", 00:18:12.888 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:12.888 "strip_size_kb": 0, 00:18:12.888 "state": "online", 00:18:12.888 "raid_level": "raid1", 00:18:12.888 "superblock": true, 00:18:12.888 "num_base_bdevs": 2, 00:18:12.888 "num_base_bdevs_discovered": 1, 00:18:12.888 "num_base_bdevs_operational": 1, 00:18:12.888 "base_bdevs_list": [ 00:18:12.888 { 00:18:12.888 "name": 
null, 00:18:12.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.888 "is_configured": false, 00:18:12.888 "data_offset": 0, 00:18:12.888 "data_size": 7936 00:18:12.888 }, 00:18:12.888 { 00:18:12.888 "name": "BaseBdev2", 00:18:12.888 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:12.888 "is_configured": true, 00:18:12.888 "data_offset": 256, 00:18:12.888 "data_size": 7936 00:18:12.888 } 00:18:12.888 ] 00:18:12.888 }' 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.888 [2024-11-18 10:46:38.744195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.888 [2024-11-18 10:46:38.744300] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:12.888 [2024-11-18 10:46:38.744315] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:12.888 request: 00:18:12.888 { 00:18:12.888 "base_bdev": "BaseBdev1", 00:18:12.888 "raid_bdev": "raid_bdev1", 00:18:12.888 "method": "bdev_raid_add_base_bdev", 00:18:12.888 "req_id": 1 00:18:12.888 } 00:18:12.888 Got JSON-RPC error response 00:18:12.888 response: 00:18:12.888 { 00:18:12.888 "code": -22, 00:18:12.888 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:12.888 } 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.888 10:46:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.274 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.274 "name": "raid_bdev1", 00:18:14.274 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:14.274 "strip_size_kb": 0, 
00:18:14.274 "state": "online", 00:18:14.274 "raid_level": "raid1", 00:18:14.274 "superblock": true, 00:18:14.274 "num_base_bdevs": 2, 00:18:14.274 "num_base_bdevs_discovered": 1, 00:18:14.274 "num_base_bdevs_operational": 1, 00:18:14.274 "base_bdevs_list": [ 00:18:14.274 { 00:18:14.274 "name": null, 00:18:14.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.275 "is_configured": false, 00:18:14.275 "data_offset": 0, 00:18:14.275 "data_size": 7936 00:18:14.275 }, 00:18:14.275 { 00:18:14.275 "name": "BaseBdev2", 00:18:14.275 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:14.275 "is_configured": true, 00:18:14.275 "data_offset": 256, 00:18:14.275 "data_size": 7936 00:18:14.275 } 00:18:14.275 ] 00:18:14.275 }' 00:18:14.275 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.275 10:46:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.535 
10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.535 "name": "raid_bdev1", 00:18:14.535 "uuid": "431a6139-f1ce-4aca-88b1-75d503f274de", 00:18:14.535 "strip_size_kb": 0, 00:18:14.535 "state": "online", 00:18:14.535 "raid_level": "raid1", 00:18:14.535 "superblock": true, 00:18:14.535 "num_base_bdevs": 2, 00:18:14.535 "num_base_bdevs_discovered": 1, 00:18:14.535 "num_base_bdevs_operational": 1, 00:18:14.535 "base_bdevs_list": [ 00:18:14.535 { 00:18:14.535 "name": null, 00:18:14.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.535 "is_configured": false, 00:18:14.535 "data_offset": 0, 00:18:14.535 "data_size": 7936 00:18:14.535 }, 00:18:14.535 { 00:18:14.535 "name": "BaseBdev2", 00:18:14.535 "uuid": "a773e2ca-3008-576e-903f-19b0e666a202", 00:18:14.535 "is_configured": true, 00:18:14.535 "data_offset": 256, 00:18:14.535 "data_size": 7936 00:18:14.535 } 00:18:14.535 ] 00:18:14.535 }' 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88821 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88821 ']' 00:18:14.535 10:46:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88821 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88821 00:18:14.535 killing process with pid 88821 00:18:14.535 Received shutdown signal, test time was about 60.000000 seconds 00:18:14.535 00:18:14.535 Latency(us) 00:18:14.535 [2024-11-18T10:46:40.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.535 [2024-11-18T10:46:40.420Z] =================================================================================================================== 00:18:14.535 [2024-11-18T10:46:40.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88821' 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88821 00:18:14.535 [2024-11-18 10:46:40.376049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.535 [2024-11-18 10:46:40.376138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.535 [2024-11-18 10:46:40.376189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.535 [2024-11-18 10:46:40.376202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:14.535 10:46:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88821 00:18:14.796 [2024-11-18 10:46:40.658510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.215 10:46:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:16.215 00:18:16.215 real 0m17.339s 00:18:16.215 user 0m22.679s 00:18:16.215 sys 0m1.696s 00:18:16.215 10:46:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.215 ************************************ 00:18:16.215 END TEST raid_rebuild_test_sb_md_interleaved 00:18:16.215 ************************************ 00:18:16.215 10:46:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.215 10:46:41 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:16.215 10:46:41 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:16.215 10:46:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88821 ']' 00:18:16.215 10:46:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88821 00:18:16.215 10:46:41 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:16.215 ************************************ 00:18:16.215 END TEST bdev_raid 00:18:16.215 ************************************ 00:18:16.215 00:18:16.215 real 11m52.801s 00:18:16.215 user 15m54.328s 00:18:16.215 sys 1m59.367s 00:18:16.215 10:46:41 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.215 10:46:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.215 10:46:41 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:16.215 10:46:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:16.215 10:46:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.215 10:46:41 -- common/autotest_common.sh@10 -- # set +x 00:18:16.215 
************************************ 00:18:16.215 START TEST spdkcli_raid 00:18:16.215 ************************************ 00:18:16.215 10:46:41 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:16.215 * Looking for test storage... 00:18:16.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:16.215 10:46:41 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:16.215 10:46:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:16.215 10:46:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:16.215 10:46:42 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.215 10:46:42 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:16.215 10:46:42 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.215 10:46:42 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:16.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.215 --rc genhtml_branch_coverage=1 00:18:16.215 --rc genhtml_function_coverage=1 00:18:16.215 --rc genhtml_legend=1 00:18:16.215 --rc geninfo_all_blocks=1 00:18:16.215 --rc geninfo_unexecuted_blocks=1 00:18:16.215 00:18:16.215 ' 00:18:16.215 10:46:42 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:16.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.215 --rc genhtml_branch_coverage=1 00:18:16.215 --rc genhtml_function_coverage=1 00:18:16.215 --rc genhtml_legend=1 00:18:16.215 --rc geninfo_all_blocks=1 00:18:16.215 --rc geninfo_unexecuted_blocks=1 00:18:16.215 00:18:16.215 ' 00:18:16.215 
10:46:42 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:16.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.215 --rc genhtml_branch_coverage=1 00:18:16.215 --rc genhtml_function_coverage=1 00:18:16.215 --rc genhtml_legend=1 00:18:16.215 --rc geninfo_all_blocks=1 00:18:16.215 --rc geninfo_unexecuted_blocks=1 00:18:16.215 00:18:16.215 ' 00:18:16.215 10:46:42 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:16.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.215 --rc genhtml_branch_coverage=1 00:18:16.215 --rc genhtml_function_coverage=1 00:18:16.215 --rc genhtml_legend=1 00:18:16.215 --rc geninfo_all_blocks=1 00:18:16.215 --rc geninfo_unexecuted_blocks=1 00:18:16.215 00:18:16.215 ' 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:16.215 10:46:42 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:16.215 10:46:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:16.216 10:46:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.216 10:46:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89497 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:16.216 10:46:42 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89497 00:18:16.216 10:46:42 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89497 ']' 00:18:16.216 10:46:42 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.216 10:46:42 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.216 10:46:42 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.216 10:46:42 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.216 10:46:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.476 [2024-11-18 10:46:42.173716] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:16.476 [2024-11-18 10:46:42.173884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89497 ] 00:18:16.476 [2024-11-18 10:46:42.344997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:16.736 [2024-11-18 10:46:42.455586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.736 [2024-11-18 10:46:42.455615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.676 10:46:43 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.676 10:46:43 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:17.676 10:46:43 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:17.676 10:46:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.676 10:46:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.676 10:46:43 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:17.676 10:46:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.676 10:46:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.676 10:46:43 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:17.676 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:17.676 ' 00:18:19.058 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:19.058 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:19.318 10:46:44 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:19.318 10:46:45 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.318 10:46:45 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.318 10:46:45 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:19.318 10:46:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.318 10:46:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 10:46:45 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:19.318 ' 00:18:20.257 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:20.517 10:46:46 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:20.517 10:46:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.517 10:46:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.517 10:46:46 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:20.517 10:46:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.517 10:46:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.517 10:46:46 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:20.517 10:46:46 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:21.086 10:46:46 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:21.086 10:46:46 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:21.086 10:46:46 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:21.086 10:46:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.086 10:46:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.086 10:46:46 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:21.086 10:46:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.086 10:46:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.086 10:46:46 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:21.086 ' 00:18:22.027 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:22.285 10:46:47 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:22.285 10:46:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:22.286 10:46:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.286 10:46:47 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:22.286 10:46:47 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.286 10:46:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.286 10:46:48 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:22.286 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:22.286 ' 00:18:23.669 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:23.669 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:23.669 10:46:49 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:23.669 10:46:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.669 10:46:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.669 10:46:49 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89497 00:18:23.669 10:46:49 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89497 ']' 00:18:23.669 10:46:49 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89497 00:18:23.669 10:46:49 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:23.669 10:46:49 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.669 10:46:49 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89497 00:18:23.928 killing process with pid 89497 00:18:23.928 10:46:49 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.928 10:46:49 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.928 10:46:49 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89497' 00:18:23.928 10:46:49 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89497 00:18:23.928 10:46:49 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89497 00:18:26.470 10:46:51 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:26.470 10:46:51 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89497 ']' 00:18:26.470 10:46:51 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89497 00:18:26.470 10:46:51 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89497 ']' 00:18:26.470 10:46:51 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89497 00:18:26.470 Process with pid 89497 is not found 00:18:26.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89497) - No such process 00:18:26.470 10:46:51 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89497 is not found' 00:18:26.470 10:46:51 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:26.470 10:46:51 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:26.470 10:46:51 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:26.470 10:46:51 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:26.470 00:18:26.470 real 0m9.952s 00:18:26.470 user 0m20.537s 00:18:26.470 sys 
0m1.189s 00:18:26.470 10:46:51 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.470 10:46:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.470 ************************************ 00:18:26.470 END TEST spdkcli_raid 00:18:26.470 ************************************ 00:18:26.470 10:46:51 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:26.470 10:46:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:26.470 10:46:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.470 10:46:51 -- common/autotest_common.sh@10 -- # set +x 00:18:26.470 ************************************ 00:18:26.470 START TEST blockdev_raid5f 00:18:26.470 ************************************ 00:18:26.470 10:46:51 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:26.470 * Looking for test storage... 00:18:26.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:26.470 10:46:51 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:26.470 10:46:51 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:26.470 10:46:51 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:26.470 10:46:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.470 10:46:52 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:26.470 10:46:52 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.470 10:46:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:26.470 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.470 --rc genhtml_branch_coverage=1 00:18:26.470 --rc genhtml_function_coverage=1 00:18:26.470 --rc genhtml_legend=1 00:18:26.470 --rc geninfo_all_blocks=1 00:18:26.470 --rc geninfo_unexecuted_blocks=1 00:18:26.470 00:18:26.470 ' 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:26.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.471 --rc genhtml_branch_coverage=1 00:18:26.471 --rc genhtml_function_coverage=1 00:18:26.471 --rc genhtml_legend=1 00:18:26.471 --rc geninfo_all_blocks=1 00:18:26.471 --rc geninfo_unexecuted_blocks=1 00:18:26.471 00:18:26.471 ' 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:26.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.471 --rc genhtml_branch_coverage=1 00:18:26.471 --rc genhtml_function_coverage=1 00:18:26.471 --rc genhtml_legend=1 00:18:26.471 --rc geninfo_all_blocks=1 00:18:26.471 --rc geninfo_unexecuted_blocks=1 00:18:26.471 00:18:26.471 ' 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:26.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.471 --rc genhtml_branch_coverage=1 00:18:26.471 --rc genhtml_function_coverage=1 00:18:26.471 --rc genhtml_legend=1 00:18:26.471 --rc geninfo_all_blocks=1 00:18:26.471 --rc geninfo_unexecuted_blocks=1 00:18:26.471 00:18:26.471 ' 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89776 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:26.471 10:46:52 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89776 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89776 ']' 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.471 10:46:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:26.471 [2024-11-18 10:46:52.235158] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:26.471 [2024-11-18 10:46:52.235402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89776 ] 00:18:26.731 [2024-11-18 10:46:52.414372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.731 [2024-11-18 10:46:52.518785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:27.672 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:27.672 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:27.672 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:27.672 Malloc0 00:18:27.672 Malloc1 00:18:27.672 Malloc2 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.672 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.672 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:27.672 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:27.672 10:46:53 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.672 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.672 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:27.672 10:46:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5b0d7d97-8a4a-4d32-a624-4502f1e57c5e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5b0d7d97-8a4a-4d32-a624-4502f1e57c5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5b0d7d97-8a4a-4d32-a624-4502f1e57c5e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9e6d9e78-a6df-46a0-848a-34884cbbfdfa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "165cbd4c-aabc-444f-8e41-541e425b7a0e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4c1b8d57-b9e9-4c08-a110-12d5de866e9a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:27.933 10:46:53 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89776 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89776 ']' 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89776 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.933 
10:46:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89776 00:18:27.933 killing process with pid 89776 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89776' 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89776 00:18:27.933 10:46:53 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89776 00:18:30.473 10:46:56 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:30.473 10:46:56 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:30.473 10:46:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:30.473 10:46:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.473 10:46:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:30.473 ************************************ 00:18:30.473 START TEST bdev_hello_world 00:18:30.473 ************************************ 00:18:30.473 10:46:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:30.473 [2024-11-18 10:46:56.252370] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:30.473 [2024-11-18 10:46:56.252553] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89839 ] 00:18:30.733 [2024-11-18 10:46:56.427579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.733 [2024-11-18 10:46:56.530444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.302 [2024-11-18 10:46:57.031183] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:31.302 [2024-11-18 10:46:57.031227] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:31.302 [2024-11-18 10:46:57.031242] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:31.302 [2024-11-18 10:46:57.031680] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:31.302 [2024-11-18 10:46:57.031797] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:31.302 [2024-11-18 10:46:57.031812] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:31.302 [2024-11-18 10:46:57.031854] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:31.302 00:18:31.302 [2024-11-18 10:46:57.031878] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:32.684 00:18:32.684 real 0m2.161s 00:18:32.684 user 0m1.785s 00:18:32.684 sys 0m0.255s 00:18:32.684 10:46:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.684 10:46:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:32.684 ************************************ 00:18:32.684 END TEST bdev_hello_world 00:18:32.684 ************************************ 00:18:32.684 10:46:58 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:32.684 10:46:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:32.684 10:46:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.684 10:46:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:32.685 ************************************ 00:18:32.685 START TEST bdev_bounds 00:18:32.685 ************************************ 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89881 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:32.685 Process bdevio pid: 89881 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89881' 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89881 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89881 ']' 00:18:32.685 10:46:58 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.685 10:46:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:32.685 [2024-11-18 10:46:58.476517] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:32.685 [2024-11-18 10:46:58.476732] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89881 ] 00:18:32.944 [2024-11-18 10:46:58.655016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:32.944 [2024-11-18 10:46:58.761930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.944 [2024-11-18 10:46:58.762125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.944 [2024-11-18 10:46:58.762128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.515 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.515 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:33.515 10:46:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:33.515 I/O targets: 00:18:33.515 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:33.515 00:18:33.515 
00:18:33.515 CUnit - A unit testing framework for C - Version 2.1-3 00:18:33.515 http://cunit.sourceforge.net/ 00:18:33.515 00:18:33.515 00:18:33.515 Suite: bdevio tests on: raid5f 00:18:33.515 Test: blockdev write read block ...passed 00:18:33.515 Test: blockdev write zeroes read block ...passed 00:18:33.775 Test: blockdev write zeroes read no split ...passed 00:18:33.775 Test: blockdev write zeroes read split ...passed 00:18:33.775 Test: blockdev write zeroes read split partial ...passed 00:18:33.775 Test: blockdev reset ...passed 00:18:33.775 Test: blockdev write read 8 blocks ...passed 00:18:33.775 Test: blockdev write read size > 128k ...passed 00:18:33.775 Test: blockdev write read invalid size ...passed 00:18:33.775 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:33.775 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:33.775 Test: blockdev write read max offset ...passed 00:18:33.775 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:33.775 Test: blockdev writev readv 8 blocks ...passed 00:18:33.775 Test: blockdev writev readv 30 x 1block ...passed 00:18:33.775 Test: blockdev writev readv block ...passed 00:18:33.775 Test: blockdev writev readv size > 128k ...passed 00:18:33.775 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:33.775 Test: blockdev comparev and writev ...passed 00:18:33.775 Test: blockdev nvme passthru rw ...passed 00:18:33.775 Test: blockdev nvme passthru vendor specific ...passed 00:18:33.775 Test: blockdev nvme admin passthru ...passed 00:18:33.775 Test: blockdev copy ...passed 00:18:33.775 00:18:33.775 Run Summary: Type Total Ran Passed Failed Inactive 00:18:33.775 suites 1 1 n/a 0 0 00:18:33.775 tests 23 23 23 0 0 00:18:33.775 asserts 130 130 130 0 n/a 00:18:33.775 00:18:33.775 Elapsed time = 0.594 seconds 00:18:33.775 0 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89881 00:18:34.035 
10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89881 ']' 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89881 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89881 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89881' 00:18:34.035 killing process with pid 89881 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89881 00:18:34.035 10:46:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89881 00:18:35.418 10:47:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:35.418 00:18:35.418 real 0m2.621s 00:18:35.418 user 0m6.455s 00:18:35.418 sys 0m0.393s 00:18:35.418 10:47:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.418 10:47:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:35.418 ************************************ 00:18:35.418 END TEST bdev_bounds 00:18:35.418 ************************************ 00:18:35.418 10:47:01 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:35.418 10:47:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:35.418 10:47:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.418 
10:47:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.418 ************************************ 00:18:35.418 START TEST bdev_nbd 00:18:35.418 ************************************ 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89941 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89941 /var/tmp/spdk-nbd.sock 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89941 ']' 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:35.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.418 10:47:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:35.418 [2024-11-18 10:47:01.186643] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:35.418 [2024-11-18 10:47:01.187302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.678 [2024-11-18 10:47:01.368708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.678 [2024-11-18 10:47:01.468715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:36.276 10:47:02 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.536 1+0 records in 00:18:36.536 1+0 records out 00:18:36.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608862 s, 6.7 MB/s 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:36.536 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:36.796 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:36.796 { 00:18:36.796 "nbd_device": "/dev/nbd0", 00:18:36.796 "bdev_name": "raid5f" 00:18:36.796 } 00:18:36.796 ]' 00:18:36.796 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:36.796 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:36.796 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:36.796 { 00:18:36.797 "nbd_device": "/dev/nbd0", 00:18:36.797 "bdev_name": "raid5f" 00:18:36.797 } 00:18:36.797 ]' 00:18:36.797 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:36.797 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:36.797 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:36.797 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:36.797 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:36.797 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:36.797 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:37.058 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:37.317 10:47:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:37.317 /dev/nbd0 00:18:37.576 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:37.576 10:47:03 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:37.576 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.577 1+0 records in 00:18:37.577 1+0 records out 00:18:37.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047728 s, 8.6 MB/s 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:37.577 { 00:18:37.577 "nbd_device": "/dev/nbd0", 00:18:37.577 "bdev_name": "raid5f" 00:18:37.577 } 00:18:37.577 ]' 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:37.577 { 00:18:37.577 "nbd_device": "/dev/nbd0", 00:18:37.577 "bdev_name": "raid5f" 00:18:37.577 } 00:18:37.577 ]' 00:18:37.577 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:37.837 256+0 records in 00:18:37.837 256+0 records out 00:18:37.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442903 s, 237 MB/s 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:37.837 256+0 records in 00:18:37.837 256+0 records out 00:18:37.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307392 s, 34.1 MB/s 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:37.837 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:38.096 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:38.097 10:47:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:38.356 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:38.357 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:38.617 malloc_lvol_verify 00:18:38.617 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:38.617 76a0a98d-75ce-4ad0-a691-9011c7de839c 00:18:38.617 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:38.877 4c1978b4-31ca-4b20-a1a3-ef2aede8b3f1 00:18:38.877 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:39.138 /dev/nbd0 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:39.138 mke2fs 1.47.0 (5-Feb-2023) 00:18:39.138 Discarding device blocks: 0/4096 done 00:18:39.138 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:39.138 00:18:39.138 Allocating group tables: 0/1 done 00:18:39.138 Writing inode tables: 0/1 done 00:18:39.138 Creating journal (1024 blocks): done 00:18:39.138 Writing superblocks and filesystem accounting information: 0/1 done 00:18:39.138 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.138 10:47:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89941 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89941 ']' 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89941 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:39.398 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.399 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89941 00:18:39.399 killing process with pid 89941 00:18:39.399 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.399 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.399 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89941' 00:18:39.399 10:47:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89941 00:18:39.399 10:47:05 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89941 00:18:40.782 10:47:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:40.782 ************************************ 00:18:40.782 END TEST bdev_nbd 00:18:40.782 ************************************ 00:18:40.782 00:18:40.782 real 0m5.584s 00:18:40.782 user 0m7.485s 00:18:40.782 sys 0m1.279s 00:18:40.782 10:47:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.782 10:47:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:41.042 10:47:06 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:41.042 10:47:06 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:41.042 10:47:06 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:41.042 10:47:06 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:41.042 10:47:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:41.042 10:47:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.042 10:47:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:41.042 ************************************ 00:18:41.042 START TEST bdev_fio 00:18:41.042 ************************************ 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:41.042 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:41.042 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:41.043 ************************************ 00:18:41.043 START TEST bdev_fio_rw_verify 00:18:41.043 ************************************ 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:41.043 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:41.303 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:41.303 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:41.303 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:41.303 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:41.303 10:47:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:41.303 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:41.303 fio-3.35 00:18:41.303 Starting 1 thread 00:18:53.527 00:18:53.527 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90147: Mon Nov 18 10:47:18 2024 00:18:53.527 read: IOPS=12.5k, BW=49.0MiB/s (51.4MB/s)(490MiB/10001msec) 00:18:53.527 slat (nsec): min=16735, max=71560, avg=18677.59, stdev=1961.66 00:18:53.527 clat (usec): min=10, max=392, avg=127.18, stdev=44.38 00:18:53.527 lat (usec): min=29, max=424, avg=145.85, stdev=44.61 00:18:53.527 clat percentiles (usec): 00:18:53.527 | 50.000th=[ 131], 99.000th=[ 212], 99.900th=[ 241], 99.990th=[ 293], 00:18:53.527 | 99.999th=[ 363] 00:18:53.527 write: IOPS=13.2k, BW=51.4MiB/s (53.9MB/s)(508MiB/9877msec); 0 zone resets 00:18:53.527 slat (usec): min=7, max=253, avg=15.96, stdev= 3.71 00:18:53.527 clat (usec): min=57, max=1459, avg=294.88, stdev=40.76 00:18:53.527 lat (usec): min=72, max=1541, avg=310.84, stdev=41.83 00:18:53.527 clat percentiles (usec): 00:18:53.527 | 50.000th=[ 302], 99.000th=[ 379], 99.900th=[ 537], 99.990th=[ 1106], 00:18:53.527 | 99.999th=[ 1385] 00:18:53.527 bw ( KiB/s): min=48568, max=54800, per=98.58%, avg=51912.00, stdev=1643.44, samples=19 00:18:53.527 iops : min=12142, max=13700, avg=12978.00, stdev=410.86, samples=19 00:18:53.527 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.63%, 250=38.96%, 500=44.34% 00:18:53.527 lat (usec) : 750=0.04%, 1000=0.01% 00:18:53.527 lat (msec) : 2=0.01% 00:18:53.527 cpu : usr=98.80%, sys=0.52%, ctx=20, majf=0, minf=10266 00:18:53.527 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.527 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.527 issued rwts: total=125451,130035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.527 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:53.527 00:18:53.527 Run status group 0 (all jobs): 00:18:53.527 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=490MiB (514MB), run=10001-10001msec 00:18:53.527 WRITE: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=508MiB (533MB), run=9877-9877msec 00:18:53.786 ----------------------------------------------------- 00:18:53.786 Suppressions used: 00:18:53.786 count bytes template 00:18:53.786 1 7 /usr/src/fio/parse.c 00:18:53.786 683 65568 /usr/src/fio/iolog.c 00:18:53.786 1 8 libtcmalloc_minimal.so 00:18:53.786 1 904 libcrypto.so 00:18:53.786 ----------------------------------------------------- 00:18:53.786 00:18:54.045 00:18:54.045 real 0m12.806s 00:18:54.045 user 0m12.978s 00:18:54.045 sys 0m0.852s 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.045 ************************************ 00:18:54.045 END TEST bdev_fio_rw_verify 00:18:54.045 ************************************ 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5b0d7d97-8a4a-4d32-a624-4502f1e57c5e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5b0d7d97-8a4a-4d32-a624-4502f1e57c5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5b0d7d97-8a4a-4d32-a624-4502f1e57c5e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9e6d9e78-a6df-46a0-848a-34884cbbfdfa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "165cbd4c-aabc-444f-8e41-541e425b7a0e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4c1b8d57-b9e9-4c08-a110-12d5de866e9a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:54.045 /home/vagrant/spdk_repo/spdk 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:18:54.045 00:18:54.045 real 0m13.094s 00:18:54.045 user 0m13.091s 00:18:54.045 sys 0m0.996s 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.045 10:47:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:54.045 ************************************ 00:18:54.045 END TEST bdev_fio 00:18:54.045 ************************************ 00:18:54.045 10:47:19 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:54.045 10:47:19 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:54.045 10:47:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:54.045 10:47:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.045 10:47:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.045 ************************************ 00:18:54.045 START TEST bdev_verify 00:18:54.045 ************************************ 00:18:54.046 10:47:19 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:54.305 [2024-11-18 10:47:20.005582] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:54.305 [2024-11-18 10:47:20.005700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90310 ] 00:18:54.305 [2024-11-18 10:47:20.185639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:54.565 [2024-11-18 10:47:20.324250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.565 [2024-11-18 10:47:20.324282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.134 Running I/O for 5 seconds... 00:18:57.061 10731.00 IOPS, 41.92 MiB/s [2024-11-18T10:47:24.327Z] 10842.00 IOPS, 42.35 MiB/s [2024-11-18T10:47:25.266Z] 10809.00 IOPS, 42.22 MiB/s [2024-11-18T10:47:26.206Z] 10818.75 IOPS, 42.26 MiB/s [2024-11-18T10:47:26.206Z] 10844.00 IOPS, 42.36 MiB/s 00:19:00.321 Latency(us) 00:19:00.321 [2024-11-18T10:47:26.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.321 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:00.321 Verification LBA range: start 0x0 length 0x2000 00:19:00.321 raid5f : 5.02 6485.02 25.33 0.00 0.00 29744.91 103.74 22551.25 00:19:00.321 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.321 Verification LBA range: start 0x2000 length 0x2000 00:19:00.321 raid5f : 5.01 4351.75 17.00 0.00 0.00 44229.35 302.28 30907.81 00:19:00.321 [2024-11-18T10:47:26.206Z] =================================================================================================================== 00:19:00.321 [2024-11-18T10:47:26.206Z] Total : 10836.77 42.33 0.00 0.00 35558.29 103.74 30907.81 00:19:01.703 00:19:01.703 real 0m7.484s 00:19:01.703 user 0m13.715s 00:19:01.703 sys 0m0.380s 00:19:01.703 10:47:27 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.703 
************************************ 00:19:01.703 END TEST bdev_verify 00:19:01.703 ************************************ 00:19:01.703 10:47:27 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:01.703 10:47:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:01.703 10:47:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:01.703 10:47:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.703 10:47:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:01.703 ************************************ 00:19:01.703 START TEST bdev_verify_big_io 00:19:01.703 ************************************ 00:19:01.703 10:47:27 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:01.703 [2024-11-18 10:47:27.575930] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:01.703 [2024-11-18 10:47:27.576125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90405 ] 00:19:01.967 [2024-11-18 10:47:27.763556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:02.241 [2024-11-18 10:47:27.901508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.241 [2024-11-18 10:47:27.901537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.821 Running I/O for 5 seconds... 
00:19:04.696 633.00 IOPS, 39.56 MiB/s [2024-11-18T10:47:31.960Z] 760.00 IOPS, 47.50 MiB/s [2024-11-18T10:47:32.897Z] 761.33 IOPS, 47.58 MiB/s [2024-11-18T10:47:33.835Z] 793.25 IOPS, 49.58 MiB/s [2024-11-18T10:47:34.094Z] 799.00 IOPS, 49.94 MiB/s 00:19:08.209 Latency(us) 00:19:08.209 [2024-11-18T10:47:34.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.209 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:08.209 Verification LBA range: start 0x0 length 0x200 00:19:08.209 raid5f : 5.25 447.43 27.96 0.00 0.00 7099633.96 200.33 316862.27 00:19:08.209 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:08.209 Verification LBA range: start 0x200 length 0x200 00:19:08.209 raid5f : 5.35 355.50 22.22 0.00 0.00 8928565.61 206.59 388293.65 00:19:08.209 [2024-11-18T10:47:34.094Z] =================================================================================================================== 00:19:08.209 [2024-11-18T10:47:34.094Z] Total : 802.93 50.18 0.00 0.00 7918372.36 200.33 388293.65 00:19:09.589 00:19:09.589 real 0m7.852s 00:19:09.589 user 0m14.435s 00:19:09.589 sys 0m0.375s 00:19:09.589 ************************************ 00:19:09.589 END TEST bdev_verify_big_io 00:19:09.589 ************************************ 00:19:09.589 10:47:35 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.589 10:47:35 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.589 10:47:35 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:09.589 10:47:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:09.589 10:47:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.589 10:47:35 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:09.589 ************************************ 00:19:09.589 START TEST bdev_write_zeroes 00:19:09.589 ************************************ 00:19:09.589 10:47:35 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:09.849 [2024-11-18 10:47:35.485777] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:09.849 [2024-11-18 10:47:35.485968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90510 ] 00:19:09.849 [2024-11-18 10:47:35.657658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.108 [2024-11-18 10:47:35.796739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.677 Running I/O for 1 seconds... 
00:19:11.614 29919.00 IOPS, 116.87 MiB/s
00:19:11.614 Latency(us)
00:19:11.614 [2024-11-18T10:47:37.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:11.614 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:11.614 raid5f : 1.01 29896.94 116.78 0.00 0.00 4268.32 1366.53 5924.00
00:19:11.614 [2024-11-18T10:47:37.499Z] ===================================================================================================================
00:19:11.614 [2024-11-18T10:47:37.499Z] Total : 29896.94 116.78 0.00 0.00 4268.32 1366.53 5924.00
00:19:12.994
00:19:12.994 real 0m3.448s
00:19:12.995 user 0m2.980s
00:19:12.995 sys 0m0.342s
00:19:12.995 10:47:38 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:12.995 10:47:38 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:12.995 ************************************
00:19:12.995 END TEST bdev_write_zeroes
00:19:12.995 ************************************
00:19:13.254 10:47:38 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:13.254 10:47:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:13.254 10:47:38 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:13.254 10:47:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:13.254 ************************************
00:19:13.254 START TEST bdev_json_nonenclosed
00:19:13.254 ************************************
00:19:13.254 10:47:38 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:13.254 [2024-11-18 10:47:39.001756] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:19:13.254 [2024-11-18 10:47:39.001868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90569 ]
00:19:13.525 [2024-11-18 10:47:39.173843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:13.525 [2024-11-18 10:47:39.309820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:13.525 [2024-11-18 10:47:39.310018] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:13.525 [2024-11-18 10:47:39.310055] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:13.525 [2024-11-18 10:47:39.310066] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:13.784 ************************************
00:19:13.784
00:19:13.784 real 0m0.654s
00:19:13.784 user 0m0.409s
00:19:13.784 sys 0m0.139s
00:19:13.784 10:47:39 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:13.784 10:47:39 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:13.784 END TEST bdev_json_nonenclosed
00:19:13.784 ************************************
00:19:13.784 10:47:39 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:13.784 10:47:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:13.784 10:47:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:13.784 10:47:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:13.784 ************************************
00:19:13.784 START TEST bdev_json_nonarray
00:19:13.784 ************************************
00:19:13.784 10:47:39 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:14.044 [2024-11-18 10:47:39.738523] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:19:14.044 [2024-11-18 10:47:39.738712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90594 ]
00:19:14.044 [2024-11-18 10:47:39.911236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:14.305 [2024-11-18 10:47:40.048675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:14.305 [2024-11-18 10:47:40.048927] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:19:14.305 [2024-11-18 10:47:40.049007] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:14.305 [2024-11-18 10:47:40.049073] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:14.565
00:19:14.565 real 0m0.663s
00:19:14.565 user 0m0.402s
00:19:14.565 sys 0m0.155s
00:19:14.565 10:47:40 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:14.565 10:47:40 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:14.565 ************************************
00:19:14.565 END TEST bdev_json_nonarray
00:19:14.565 ************************************
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:19:14.565 10:47:40 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:19:14.565
00:19:14.565 real 0m48.528s
00:19:14.565 user 1m5.054s
00:19:14.565 sys 0m5.461s
00:19:14.565 ************************************
00:19:14.565 END TEST blockdev_raid5f
00:19:14.565 ************************************
00:19:14.565 10:47:40 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:14.565 10:47:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:14.565 10:47:40 -- spdk/autotest.sh@194 -- # uname -s
00:19:14.565 10:47:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:19:14.565 10:47:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:14.565 10:47:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:14.565 10:47:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:19:14.565 10:47:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:19:14.565 10:47:40 -- spdk/autotest.sh@260 -- # timing_exit lib
00:19:14.565 10:47:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:14.565 10:47:40 -- common/autotest_common.sh@10 -- # set +x
00:19:14.825 10:47:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:19:14.825 10:47:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:19:14.825 10:47:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:19:14.825 10:47:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:19:14.825 10:47:40 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:19:14.825 10:47:40 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:19:14.825 10:47:40 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:19:14.825 10:47:40 -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:14.825 10:47:40 -- common/autotest_common.sh@10 -- # set +x
00:19:14.825 10:47:40 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:19:14.825 10:47:40 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:19:14.825 10:47:40 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:19:14.825 10:47:40 -- common/autotest_common.sh@10 -- # set +x
00:19:17.368 INFO: APP EXITING
00:19:17.368 INFO: killing all VMs
00:19:17.368 INFO: killing vhost app
00:19:17.368 INFO: EXIT DONE
00:19:17.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:17.627 Waiting for block devices as requested
00:19:17.627 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:19:17.887 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:18.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:18.828 Cleaning
00:19:18.828 Removing: /var/run/dpdk/spdk0/config
00:19:18.828 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:19:18.828 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:19:18.828 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:19:18.828 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:19:18.828 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:19:18.828 Removing: /var/run/dpdk/spdk0/hugepage_info
00:19:18.828 Removing: /dev/shm/spdk_tgt_trace.pid56788
00:19:18.828 Removing: /var/run/dpdk/spdk0
00:19:18.828 Removing: /var/run/dpdk/spdk_pid56542
00:19:18.828 Removing: /var/run/dpdk/spdk_pid56788
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57017
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57121
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57177
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57316
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57334
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57544
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57656
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57768
00:19:18.828 Removing: /var/run/dpdk/spdk_pid57896
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58004
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58043
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58080
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58156
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58283
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58726
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58801
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58875
00:19:18.828 Removing: /var/run/dpdk/spdk_pid58896
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59053
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59069
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59225
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59241
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59316
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59334
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59398
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59427
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59622
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59664
00:19:18.828 Removing: /var/run/dpdk/spdk_pid59752
00:19:18.828 Removing: /var/run/dpdk/spdk_pid61106
00:19:18.828 Removing: /var/run/dpdk/spdk_pid61312
00:19:18.828 Removing: /var/run/dpdk/spdk_pid61452
00:19:18.828 Removing: /var/run/dpdk/spdk_pid62101
00:19:18.828 Removing: /var/run/dpdk/spdk_pid62307
00:19:18.828 Removing: /var/run/dpdk/spdk_pid62447
00:19:18.828 Removing: /var/run/dpdk/spdk_pid63096
00:19:18.828 Removing: /var/run/dpdk/spdk_pid63420
00:19:18.828 Removing: /var/run/dpdk/spdk_pid63566
00:19:18.828 Removing: /var/run/dpdk/spdk_pid64946
00:19:18.828 Removing: /var/run/dpdk/spdk_pid65199
00:19:19.089 Removing: /var/run/dpdk/spdk_pid65345
00:19:19.089 Removing: /var/run/dpdk/spdk_pid66725
00:19:19.089 Removing: /var/run/dpdk/spdk_pid66978
00:19:19.089 Removing: /var/run/dpdk/spdk_pid67129
00:19:19.089 Removing: /var/run/dpdk/spdk_pid68514
00:19:19.089 Removing: /var/run/dpdk/spdk_pid68960
00:19:19.089 Removing: /var/run/dpdk/spdk_pid69100
00:19:19.089 Removing: /var/run/dpdk/spdk_pid70580
00:19:19.089 Removing: /var/run/dpdk/spdk_pid70840
00:19:19.089 Removing: /var/run/dpdk/spdk_pid70991
00:19:19.089 Removing: /var/run/dpdk/spdk_pid72476
00:19:19.089 Removing: /var/run/dpdk/spdk_pid72748
00:19:19.089 Removing: /var/run/dpdk/spdk_pid72894
00:19:19.089 Removing: /var/run/dpdk/spdk_pid74374
00:19:19.089 Removing: /var/run/dpdk/spdk_pid74861
00:19:19.089 Removing: /var/run/dpdk/spdk_pid75001
00:19:19.089 Removing: /var/run/dpdk/spdk_pid75150
00:19:19.089 Removing: /var/run/dpdk/spdk_pid75577
00:19:19.089 Removing: /var/run/dpdk/spdk_pid76302
00:19:19.089 Removing: /var/run/dpdk/spdk_pid76678
00:19:19.089 Removing: /var/run/dpdk/spdk_pid77361
00:19:19.089 Removing: /var/run/dpdk/spdk_pid77802
00:19:19.089 Removing: /var/run/dpdk/spdk_pid78550
00:19:19.089 Removing: /var/run/dpdk/spdk_pid78959
00:19:19.089 Removing: /var/run/dpdk/spdk_pid80928
00:19:19.089 Removing: /var/run/dpdk/spdk_pid81366
00:19:19.089 Removing: /var/run/dpdk/spdk_pid81804
00:19:19.089 Removing: /var/run/dpdk/spdk_pid83909
00:19:19.089 Removing: /var/run/dpdk/spdk_pid84395
00:19:19.089 Removing: /var/run/dpdk/spdk_pid84913
00:19:19.089 Removing: /var/run/dpdk/spdk_pid85977
00:19:19.089 Removing: /var/run/dpdk/spdk_pid86304
00:19:19.089 Removing: /var/run/dpdk/spdk_pid87237
00:19:19.089 Removing: /var/run/dpdk/spdk_pid87565
00:19:19.089 Removing: /var/run/dpdk/spdk_pid88498
00:19:19.089 Removing: /var/run/dpdk/spdk_pid88821
00:19:19.089 Removing: /var/run/dpdk/spdk_pid89497
00:19:19.089 Removing: /var/run/dpdk/spdk_pid89776
00:19:19.089 Removing: /var/run/dpdk/spdk_pid89839
00:19:19.089 Removing: /var/run/dpdk/spdk_pid89881
00:19:19.089 Removing: /var/run/dpdk/spdk_pid90132
00:19:19.089 Removing: /var/run/dpdk/spdk_pid90310
00:19:19.089 Removing: /var/run/dpdk/spdk_pid90405
00:19:19.089 Removing: /var/run/dpdk/spdk_pid90510
00:19:19.089 Removing: /var/run/dpdk/spdk_pid90569
00:19:19.089 Removing: /var/run/dpdk/spdk_pid90594
00:19:19.089 Clean
00:19:19.350 10:47:44 -- common/autotest_common.sh@1453 -- # return 0
00:19:19.350 10:47:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:19:19.350 10:47:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:19.350 10:47:44 -- common/autotest_common.sh@10 -- # set +x
00:19:19.350 10:47:45 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:19:19.350 10:47:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:19.350 10:47:45 -- common/autotest_common.sh@10 -- # set +x
00:19:19.350 10:47:45 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:19.350 10:47:45 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:19:19.350 10:47:45 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:19:19.350 10:47:45 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:19:19.350 10:47:45 -- spdk/autotest.sh@398 -- # hostname
00:19:19.350 10:47:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:19:19.609 geninfo: WARNING: invalid characters removed from testname!
00:19:46.245 10:48:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:47.185 10:48:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:49.097 10:48:14 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:51.006 10:48:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:52.917 10:48:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:54.831 10:48:20 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:56.741 10:48:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:19:56.741 10:48:22 -- spdk/autorun.sh@1 -- $ timing_finish
00:19:56.741 10:48:22 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:19:56.741 10:48:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:19:56.741 10:48:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:19:56.741 10:48:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:56.741 + [[ -n 5425 ]]
00:19:56.741 + sudo kill 5425
00:19:56.752 [Pipeline] }
00:19:56.769 [Pipeline] // timeout
00:19:56.774 [Pipeline] }
00:19:56.788 [Pipeline] // stage
00:19:56.794 [Pipeline] }
00:19:56.808 [Pipeline] // catchError
00:19:56.818 [Pipeline] stage
00:19:56.820 [Pipeline] { (Stop VM)
00:19:56.833 [Pipeline] sh
00:19:57.117 + vagrant halt
00:19:59.657 ==> default: Halting domain...
00:20:07.803 [Pipeline] sh
00:20:08.130 + vagrant destroy -f
00:20:10.680 ==> default: Removing domain...
00:20:10.694 [Pipeline] sh
00:20:10.979 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:20:10.990 [Pipeline] }
00:20:11.005 [Pipeline] // stage
00:20:11.010 [Pipeline] }
00:20:11.025 [Pipeline] // dir
00:20:11.031 [Pipeline] }
00:20:11.046 [Pipeline] // wrap
00:20:11.052 [Pipeline] }
00:20:11.066 [Pipeline] // catchError
00:20:11.076 [Pipeline] stage
00:20:11.078 [Pipeline] { (Epilogue)
00:20:11.092 [Pipeline] sh
00:20:11.377 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:15.644 [Pipeline] catchError
00:20:15.645 [Pipeline] {
00:20:15.655 [Pipeline] sh
00:20:15.937 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:15.937 Artifacts sizes are good
00:20:15.947 [Pipeline] }
00:20:15.961 [Pipeline] // catchError
00:20:15.973 [Pipeline] archiveArtifacts
00:20:15.980 Archiving artifacts
00:20:16.076 [Pipeline] cleanWs
00:20:16.088 [WS-CLEANUP] Deleting project workspace...
00:20:16.088 [WS-CLEANUP] Deferred wipeout is used...
00:20:16.095 [WS-CLEANUP] done
00:20:16.097 [Pipeline] }
00:20:16.114 [Pipeline] // stage
00:20:16.120 [Pipeline] }
00:20:16.136 [Pipeline] // node
00:20:16.143 [Pipeline] End of Pipeline
00:20:16.184 Finished: SUCCESS